I0326 23:36:31.891273 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0326 23:36:31.891503 7 e2e.go:124] Starting e2e run "3daa3541-faa3-4693-9570-7009814c3d0d" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1585265790 - Will randomize all specs Will run 275 of 4992 specs Mar 26 23:36:31.944: INFO: >>> kubeConfig: /root/.kube/config Mar 26 23:36:31.947: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 26 23:36:31.972: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 26 23:36:32.010: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 26 23:36:32.010: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 26 23:36:32.010: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 26 23:36:32.021: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Mar 26 23:36:32.021: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 26 23:36:32.021: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad Mar 26 23:36:32.022: INFO: kube-apiserver version: v1.17.0 Mar 26 23:36:32.022: INFO: >>> kubeConfig: /root/.kube/config Mar 26 23:36:32.028: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:36:32.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api Mar 26 23:36:32.074: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 26 23:36:32.082: INFO: Waiting up to 5m0s for pod "downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411" in namespace "downward-api-9028" to be "Succeeded or Failed" Mar 26 23:36:32.107: INFO: Pod "downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411": Phase="Pending", Reason="", readiness=false. Elapsed: 25.768419ms Mar 26 23:36:34.111: INFO: Pod "downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029411732s Mar 26 23:36:36.115: INFO: Pod "downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033677634s STEP: Saw pod success Mar 26 23:36:36.115: INFO: Pod "downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411" satisfied condition "Succeeded or Failed" Mar 26 23:36:36.119: INFO: Trying to get logs from node latest-worker pod downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411 container dapi-container: STEP: delete the pod Mar 26 23:36:36.147: INFO: Waiting for pod downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411 to disappear Mar 26 23:36:36.151: INFO: Pod downward-api-d9e71ed1-66a1-42ad-8b8d-574240333411 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:36:36.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9028" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:36:36.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:36:47.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-587" for this suite. • [SLOW TEST:11.218 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":2,"skipped":42,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:36:47.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:36:47.433: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.891861ms)
Mar 26 23:36:47.436: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.925409ms)
Mar 26 23:36:47.440: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.800359ms)
Mar 26 23:36:47.443: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.498098ms)
Mar 26 23:36:47.447: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.181448ms)
Mar 26 23:36:47.450: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.193914ms)
Mar 26 23:36:47.453: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.551775ms)
Mar 26 23:36:47.457: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.687976ms)
Mar 26 23:36:47.479: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 22.06483ms)
Mar 26 23:36:47.482: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.606357ms)
Mar 26 23:36:47.484: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.378346ms)
Mar 26 23:36:47.487: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.667376ms)
Mar 26 23:36:47.490: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.887698ms)
Mar 26 23:36:47.493: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.278045ms)
Mar 26 23:36:47.496: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.304374ms)
Mar 26 23:36:47.499: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.615315ms)
Mar 26 23:36:47.502: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.487272ms)
Mar 26 23:36:47.504: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.5388ms)
Mar 26 23:36:47.507: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.421474ms)
Mar 26 23:36:47.515: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/
(200; 8.238521ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:36:47.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3142" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":3,"skipped":49,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:36:47.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 23:36:47.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372" in namespace "projected-4352" to be "Succeeded or Failed" Mar 26 23:36:47.650: INFO: Pod "downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372": Phase="Pending", Reason="", readiness=false. Elapsed: 11.717919ms Mar 26 23:36:49.661: INFO: Pod "downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022679068s Mar 26 23:36:51.666: INFO: Pod "downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02707183s STEP: Saw pod success Mar 26 23:36:51.666: INFO: Pod "downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372" satisfied condition "Succeeded or Failed" Mar 26 23:36:51.669: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372 container client-container: STEP: delete the pod Mar 26 23:36:51.706: INFO: Waiting for pod downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372 to disappear Mar 26 23:36:51.734: INFO: Pod downwardapi-volume-c5083f83-5a7a-4c44-ac22-26912a238372 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:36:51.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4352" for this suite. 
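The downward API volume exercised here can be reproduced with a manifest along these lines (a minimal sketch: pod, container, and path names are illustrative, not the generated ones above). Because the container sets no memory limit, the limits.memory resourceFieldRef falls back to the node's allocatable memory, which is what the test asserts:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumption: any image that can cat a file
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory here, so the projected file reports node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory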
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":51,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:36:51.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 26 23:36:56.356: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fe15bcf6-5032-4522-b814-f017875d4829" Mar 26 23:36:56.356: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fe15bcf6-5032-4522-b814-f017875d4829" in namespace "pods-3419" to be "terminated due to deadline exceeded" Mar 26 23:36:56.398: INFO: Pod "pod-update-activedeadlineseconds-fe15bcf6-5032-4522-b814-f017875d4829": Phase="Running", Reason="", readiness=true. Elapsed: 41.880451ms Mar 26 23:36:58.401: INFO: Pod "pod-update-activedeadlineseconds-fe15bcf6-5032-4522-b814-f017875d4829": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.045474353s Mar 26 23:36:58.401: INFO: Pod "pod-update-activedeadlineseconds-fe15bcf6-5032-4522-b814-f017875d4829" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:36:58.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3419" for this suite. 
• [SLOW TEST:6.668 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":57,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:36:58.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782 Mar 26 23:36:58.531: INFO: Pod name my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782: Found 0 pods out of 1 Mar 26 23:37:03.534: INFO: Pod name my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782: Found 1 pods out of 1 Mar 26 23:37:03.535: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782" are running Mar 26 23:37:03.537: INFO: Pod "my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782-vfhs6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-26 23:36:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-26 23:37:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-26 23:37:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-26 23:36:58 +0000 UTC Reason: Message:}]) Mar 26 23:37:03.537: INFO: Trying to dial the pod Mar 26 23:37:08.549: INFO: Controller my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782: Got expected result from replica 1 [my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782-vfhs6]: "my-hostname-basic-2cce93e5-5a06-4e9c-b63e-f56e1aff6782-vfhs6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:37:08.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4997" for this suite. 
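The controller under test is a plain ReplicationController with one replica whose pod serves its own hostname over HTTP; the test then dials the replica and expects the pod name back, as logged above. A minimal sketch (the image and port are assumptions; the real test generates a UUID-suffixed name):

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic                  # illustrative
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumption
        args: ["serve-hostname"]           # replies with the pod's hostname
        ports:
        - containerPort: 9376              # assumption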
• [SLOW TEST:10.148 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":6,"skipped":63,"failed":0} SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:37:08.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-3cc4583c-28c8-4a6d-85e2-7d6bdb1cba3b STEP: Creating secret with name secret-projected-all-test-volume-13417f13-927f-49be-8c65-a36d2cd312d5 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 26 23:37:08.666: INFO: Waiting up to 5m0s for pod "projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94" in namespace "projected-5659" to be "Succeeded or Failed" Mar 26 23:37:08.682: INFO: Pod "projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94": Phase="Pending", Reason="", readiness=false. Elapsed: 16.059125ms Mar 26 23:37:10.686: INFO: Pod "projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019977787s Mar 26 23:37:12.690: INFO: Pod "projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024312958s STEP: Saw pod success Mar 26 23:37:12.691: INFO: Pod "projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94" satisfied condition "Succeeded or Failed" Mar 26 23:37:12.694: INFO: Trying to get logs from node latest-worker pod projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94 container projected-all-volume-test: STEP: delete the pod Mar 26 23:37:12.724: INFO: Waiting for pod projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94 to disappear Mar 26 23:37:12.739: INFO: Pod projected-volume-7b3ca7b6-65d2-46a9-8d92-ef2e8f59cf94 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:37:12.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5659" for this suite. 
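The "all components" here are the three projection sources, configMap, secret, and downwardAPI, combined in a single projected volume, matching the two objects the test created above. A minimal sketch (object, key, and path names are illustrative; the test's names carry UUIDs):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                         # assumption
    command: ["sh", "-c", "cat /all-volume/podname /all-volume/configmap-data /all-volume/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: projected-all-configmap    # illustrative
          items:
          - key: data
            path: configmap-data
      - secret:
          name: projected-all-secret       # illustrative
          items:
          - key: data
            path: secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name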
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":7,"skipped":68,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:37:12.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3002 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 26 23:37:12.842: INFO: Found 0 stateful pods, waiting for 3 Mar 26 23:37:22.846: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 26 23:37:22.846: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 26 23:37:22.846: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 26 23:37:22.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3002 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 23:37:25.305: INFO: stderr: "I0326 23:37:25.176702 32 log.go:172] (0xc000a10000) (0xc000774000) Create stream\nI0326 23:37:25.176772 32 log.go:172] (0xc000a10000) (0xc000774000) Stream added, broadcasting: 1\nI0326 23:37:25.180137 32 log.go:172] (0xc000a10000) Reply frame received for 1\nI0326 23:37:25.180180 32 log.go:172] (0xc000a10000) (0xc0007740a0) Create stream\nI0326 23:37:25.180194 32 log.go:172] (0xc000a10000) (0xc0007740a0) Stream added, broadcasting: 3\nI0326 23:37:25.181494 32 log.go:172] (0xc000a10000) Reply frame received for 3\nI0326 23:37:25.181545 32 log.go:172] (0xc000a10000) (0xc0007c8000) Create stream\nI0326 23:37:25.181569 32 log.go:172] (0xc000a10000) (0xc0007c8000) Stream added, broadcasting: 5\nI0326 23:37:25.182821 32 log.go:172] (0xc000a10000) Reply frame received for 5\nI0326 23:37:25.261896 32 log.go:172] (0xc000a10000) Data frame received for 5\nI0326 23:37:25.261924 32 log.go:172] (0xc0007c8000) (5) Data frame handling\nI0326 23:37:25.261942 32 log.go:172] (0xc0007c8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0326 23:37:25.296664 32 log.go:172] (0xc000a10000) Data frame received for 3\nI0326 23:37:25.296741 32 log.go:172] (0xc0007740a0) (3) Data frame handling\nI0326 23:37:25.296752 32 log.go:172] (0xc0007740a0) (3) Data frame sent\nI0326 23:37:25.297085 32 log.go:172] (0xc000a10000) Data frame received for 5\nI0326 23:37:25.297141 32 log.go:172] (0xc0007c8000) (5) 
Data frame handling\nI0326 23:37:25.297522 32 log.go:172] (0xc000a10000) Data frame received for 3\nI0326 23:37:25.297551 32 log.go:172] (0xc0007740a0) (3) Data frame handling\nI0326 23:37:25.299803 32 log.go:172] (0xc000a10000) Data frame received for 1\nI0326 23:37:25.299814 32 log.go:172] (0xc000774000) (1) Data frame handling\nI0326 23:37:25.299834 32 log.go:172] (0xc000774000) (1) Data frame sent\nI0326 23:37:25.299845 32 log.go:172] (0xc000a10000) (0xc000774000) Stream removed, broadcasting: 1\nI0326 23:37:25.299974 32 log.go:172] (0xc000a10000) Go away received\nI0326 23:37:25.300070 32 log.go:172] (0xc000a10000) (0xc000774000) Stream removed, broadcasting: 1\nI0326 23:37:25.300082 32 log.go:172] (0xc000a10000) (0xc0007740a0) Stream removed, broadcasting: 3\nI0326 23:37:25.300088 32 log.go:172] (0xc000a10000) (0xc0007c8000) Stream removed, broadcasting: 5\n" Mar 26 23:37:25.305: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 23:37:25.305: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 26 23:37:35.337: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 26 23:37:45.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3002 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 23:37:45.601: INFO: stderr: "I0326 23:37:45.519930 67 log.go:172] (0xc0006cc0b0) (0xc000671720) Create stream\nI0326 23:37:45.519979 67 log.go:172] (0xc0006cc0b0) (0xc000671720) Stream added, broadcasting: 1\nI0326 23:37:45.524144 67 log.go:172] (0xc0006cc0b0) Reply frame received for 1\nI0326 23:37:45.524189 67 log.go:172] (0xc0006cc0b0) (0xc0005717c0) Create stream\nI0326 23:37:45.524203 67 log.go:172] (0xc0006cc0b0) (0xc0005717c0) Stream added, broadcasting: 3\nI0326 23:37:45.525217 67 log.go:172] (0xc0006cc0b0) Reply frame received for 3\nI0326 23:37:45.525243 67 log.go:172] (0xc0006cc0b0) (0xc0006717c0) Create stream\nI0326 23:37:45.525251 67 log.go:172] (0xc0006cc0b0) (0xc0006717c0) Stream added, broadcasting: 5\nI0326 23:37:45.526221 67 log.go:172] (0xc0006cc0b0) Reply frame received for 5\nI0326 23:37:45.595308 67 log.go:172] (0xc0006cc0b0) Data frame received for 5\nI0326 23:37:45.595349 67 log.go:172] (0xc0006717c0) (5) Data frame handling\nI0326 23:37:45.595365 67 log.go:172] (0xc0006717c0) (5) Data frame sent\nI0326 23:37:45.595380 67 log.go:172] (0xc0006cc0b0) Data frame received for 5\nI0326 23:37:45.595391 67 log.go:172] (0xc0006717c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 23:37:45.595439 67 log.go:172] (0xc0006cc0b0) Data frame received for 3\nI0326 23:37:45.595472 67 log.go:172] (0xc0005717c0) (3) Data frame handling\nI0326 23:37:45.595500 67 log.go:172] (0xc0005717c0) (3) Data frame sent\nI0326 23:37:45.595521 67 log.go:172] (0xc0006cc0b0) Data frame received for 3\nI0326 23:37:45.595542 67 log.go:172] (0xc0005717c0) (3) Data frame handling\nI0326 23:37:45.597087 67 log.go:172] (0xc0006cc0b0) Data frame received for 1\nI0326 23:37:45.597103 67 log.go:172] (0xc000671720) (1) Data frame handling\nI0326 23:37:45.597189 67 log.go:172] (0xc000671720) (1) Data frame sent\nI0326 23:37:45.597206 67 log.go:172] 
(0xc0006cc0b0) (0xc000671720) Stream removed, broadcasting: 1\nI0326 23:37:45.597285 67 log.go:172] (0xc0006cc0b0) Go away received\nI0326 23:37:45.597419 67 log.go:172] (0xc0006cc0b0) (0xc000671720) Stream removed, broadcasting: 1\nI0326 23:37:45.597433 67 log.go:172] (0xc0006cc0b0) (0xc0005717c0) Stream removed, broadcasting: 3\nI0326 23:37:45.597440 67 log.go:172] (0xc0006cc0b0) (0xc0006717c0) Stream removed, broadcasting: 5\n" Mar 26 23:37:45.602: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 23:37:45.602: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Mar 26 23:38:05.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3002 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 26 23:38:05.853: INFO: stderr: "I0326 23:38:05.753588 88 log.go:172] (0xc000a26790) (0xc0009a4500) Create stream\nI0326 23:38:05.754277 88 log.go:172] (0xc000a26790) (0xc0009a4500) Stream added, broadcasting: 1\nI0326 23:38:05.761563 88 log.go:172] (0xc000a26790) Reply frame received for 1\nI0326 23:38:05.761595 88 log.go:172] (0xc000a26790) (0xc000517680) Create stream\nI0326 23:38:05.761604 88 log.go:172] (0xc000a26790) (0xc000517680) Stream added, broadcasting: 3\nI0326 23:38:05.762525 88 log.go:172] (0xc000a26790) Reply frame received for 3\nI0326 23:38:05.762576 88 log.go:172] (0xc000a26790) (0xc0003d4aa0) Create stream\nI0326 23:38:05.762595 88 log.go:172] (0xc000a26790) (0xc0003d4aa0) Stream added, broadcasting: 5\nI0326 23:38:05.763425 88 log.go:172] (0xc000a26790) Reply frame received for 5\nI0326 23:38:05.815782 88 log.go:172] (0xc000a26790) Data frame received for 5\nI0326 23:38:05.815815 88 log.go:172] (0xc0003d4aa0) (5) Data frame handling\nI0326 23:38:05.815836 88 log.go:172] (0xc0003d4aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0326 23:38:05.847856 88 log.go:172] (0xc000a26790) Data frame received for 3\nI0326 23:38:05.847916 88 log.go:172] (0xc000517680) (3) Data frame handling\nI0326 23:38:05.847938 88 log.go:172] (0xc000517680) (3) Data frame sent\nI0326 23:38:05.847961 88 log.go:172] (0xc000a26790) Data frame received for 3\nI0326 23:38:05.847982 88 log.go:172] (0xc000517680) (3) Data frame handling\nI0326 23:38:05.848033 88 log.go:172] (0xc000a26790) Data frame received for 5\nI0326 23:38:05.848075 88 log.go:172] (0xc0003d4aa0) (5) Data frame handling\nI0326 23:38:05.849828 88 log.go:172] (0xc000a26790) Data frame received for 1\nI0326 23:38:05.849840 88 log.go:172] (0xc0009a4500) (1) Data frame handling\nI0326 23:38:05.849858 88 log.go:172] (0xc0009a4500) (1) Data frame sent\nI0326 23:38:05.849866 88 log.go:172] (0xc000a26790) (0xc0009a4500) Stream removed, broadcasting: 1\nI0326 23:38:05.850083 88 log.go:172] (0xc000a26790) (0xc0009a4500) Stream removed, broadcasting: 1\nI0326 23:38:05.850094 88 log.go:172] (0xc000a26790) (0xc000517680) Stream removed, broadcasting: 3\nI0326 23:38:05.850169 88 log.go:172] (0xc000a26790) Go away received\nI0326 23:38:05.850196 88 log.go:172] (0xc000a26790) (0xc0003d4aa0) Stream removed, broadcasting: 5\n" Mar 26 23:38:05.853: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 26 23:38:05.853: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 26 23:38:15.882: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 26 23:38:25.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3002 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 26 23:38:26.140: INFO: stderr: "I0326 23:38:26.034288 109 log.go:172] (0xc0000e8e70) (0xc0009300a0) Create stream\nI0326 23:38:26.034337 109 log.go:172] (0xc0000e8e70) (0xc0009300a0) Stream added, broadcasting: 1\nI0326 23:38:26.036561 109 log.go:172] (0xc0000e8e70) Reply frame received for 1\nI0326 23:38:26.036581 109 log.go:172] (0xc0000e8e70) (0xc000717360) Create stream\nI0326 23:38:26.036588 109 log.go:172] (0xc0000e8e70) (0xc000717360) Stream added, broadcasting: 3\nI0326 23:38:26.037708 109 log.go:172] (0xc0000e8e70) Reply frame received for 3\nI0326 23:38:26.037762 109 log.go:172] (0xc0000e8e70) (0xc0009301e0) Create stream\nI0326 23:38:26.037790 109 log.go:172] (0xc0000e8e70) (0xc0009301e0) Stream added, broadcasting: 5\nI0326 23:38:26.038701 109 log.go:172] (0xc0000e8e70) Reply frame received for 5\nI0326 23:38:26.127453 109 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0326 23:38:26.127486 109 log.go:172] (0xc0009301e0) (5) Data frame handling\nI0326 23:38:26.127510 109 log.go:172] (0xc0009301e0) (5) Data frame sent\nI0326 23:38:26.127523 109 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0326 23:38:26.127535 109 log.go:172] (0xc0009301e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0326 23:38:26.128250 109 log.go:172] (0xc0000e8e70) Data frame received for 3\nI0326 23:38:26.128281 109 log.go:172] (0xc000717360) (3) Data frame handling\nI0326 23:38:26.128308 109 log.go:172] (0xc000717360) (3) Data frame sent\nI0326 23:38:26.128336 109 log.go:172] (0xc0000e8e70) Data frame received for 3\nI0326 23:38:26.128351 109 log.go:172] (0xc000717360) (3) Data frame handling\nI0326 23:38:26.136244 109 log.go:172] (0xc0000e8e70) Data frame received for 1\nI0326 23:38:26.136270 109 log.go:172] (0xc0009300a0) (1) Data frame handling\nI0326 23:38:26.136284 109 log.go:172] (0xc0009300a0) (1) Data frame sent\nI0326 23:38:26.136302 109 log.go:172] (0xc0000e8e70) (0xc0009300a0) Stream removed, broadcasting: 1\nI0326 23:38:26.136315 109 log.go:172] (0xc0000e8e70) Go away received\nI0326 23:38:26.136699 109 log.go:172] (0xc0000e8e70) (0xc0009300a0) Stream removed, broadcasting: 1\nI0326 23:38:26.136721 109 log.go:172] (0xc0000e8e70) (0xc000717360) Stream removed, broadcasting: 3\nI0326 23:38:26.136731 109 log.go:172] (0xc0000e8e70) (0xc0009301e0) Stream removed, broadcasting: 5\n" Mar 26 23:38:26.140: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 26 23:38:26.140: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 26 23:38:46.159: INFO: Deleting all statefulset in ns statefulset-3002 Mar 26 23:38:46.162: INFO: Scaling statefulset ss2 to 0 Mar 26 23:39:06.179: INFO: Waiting for statefulset status.replicas updated to 0 Mar 26 23:39:06.182: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:39:06.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3002" for this suite. • [SLOW TEST:113.459 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":8,"skipped":71,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:39:06.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:39:06.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Mar 26 23:39:06.480: INFO: stderr: "" Mar 26 23:39:06.480: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:39:06.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2905" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":9,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:39:06.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:39:06.599: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b7bed802-44d6-4767-858a-befa2f5e441b", Controller:(*bool)(0xc002843426), BlockOwnerDeletion:(*bool)(0xc002843427)}} Mar 26 23:39:06.629: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"98561fc1-6efa-4d79-a0d0-492e6e923dc9", Controller:(*bool)(0xc00273cc22), BlockOwnerDeletion:(*bool)(0xc00273cc23)}} Mar 26 23:39:06.640: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9667ba45-e557-460d-b9f9-11e8870e5845", Controller:(*bool)(0xc0029e4cf6), BlockOwnerDeletion:(*bool)(0xc0029e4cf7)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:39:11.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-269" for this suite. 
• [SLOW TEST:5.202 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":10,"skipped":111,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:39:11.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 23:39:12.713: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 23:39:14.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720862752, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720862752, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720862752, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720862752, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 26 23:39:17.747: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:39:18.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3376" for this suite. 
STEP: Destroying namespace "webhook-3376-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.598 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":11,"skipped":122,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:39:18.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Mar 26 23:39:23.186: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6183 pod-service-account-7da45083-a6f9-4fff-9434-852bb7574821 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 26 23:39:23.420: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6183 pod-service-account-7da45083-a6f9-4fff-9434-852bb7574821 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 26 23:39:23.623: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6183 pod-service-account-7da45083-a6f9-4fff-9434-852bb7574821 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:39:23.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6183" for this suite. 
• [SLOW TEST:5.538 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":12,"skipped":129,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:39:23.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-6e595f17-e0af-4823-b9cf-3a93101eea4c STEP: Creating a pod to test consume secrets Mar 26 23:39:23.921: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9" in namespace "projected-2336" to be "Succeeded or Failed" Mar 26 23:39:23.940: INFO: Pod "pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.12577ms Mar 26 23:39:25.944: INFO: Pod "pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022533027s Mar 26 23:39:27.947: INFO: Pod "pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025916612s STEP: Saw pod success Mar 26 23:39:27.947: INFO: Pod "pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9" satisfied condition "Succeeded or Failed" Mar 26 23:39:27.950: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9 container projected-secret-volume-test: STEP: delete the pod Mar 26 23:39:28.023: INFO: Waiting for pod pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9 to disappear Mar 26 23:39:28.030: INFO: Pod pod-projected-secrets-50b46478-945f-43fb-bdce-9b24259613a9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:39:28.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2336" for this suite. 
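"Mappings and Item Mode" means the secret key is remapped to a new path and given a per-file mode inside a projected volume. A minimal sketch (secret, key, and path names are illustrative; the log's generated names carry UUIDs):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                         # assumption
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map  # illustrative
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400                     # the per-item "Item Mode" under test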
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":131,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:39:28.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-4da9a03c-3036-41a7-8b59-c3abb104db6e in namespace container-probe-6403 Mar 26 23:39:32.111: INFO: Started pod test-webserver-4da9a03c-3036-41a7-8b59-c3abb104db6e in namespace container-probe-6403 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 23:39:32.114: INFO: Initial restart count of pod test-webserver-4da9a03c-3036-41a7-8b59-c3abb104db6e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:43:32.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6403" for this suite. 
• [SLOW TEST:244.797 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:43:32.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7142.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7142.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7142.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7142.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7142.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7142.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 23:43:39.101: INFO: DNS probes using dns-7142/dns-test-c1d4deab-2d01-4f5a-a6f2-b419eac448da succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:43:39.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7142" for this suite. 
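The names probed above (dns-querier-2.dns-test-service-2.dns-7142.svc.cluster.local) come from the pod's hostname and subdomain fields paired with a headless service whose name matches the subdomain. A sketch of the two objects (selector and image are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                  # headless
  selector:
    dns-test: "true"               # assumption
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    dns-test: "true"
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2    # yields dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: busybox                 # assumption
    command: ["sleep", "3600"]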
• [SLOW TEST:6.379 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":15,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:43:39.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:43:43.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6427" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:43:43.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 26 23:43:49.580: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5435 PodName:pod-sharedvolume-253b9999-186e-4a1d-8f34-9a2ba65a31c1 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 23:43:49.580: INFO: >>> kubeConfig: /root/.kube/config I0326 23:43:49.613391 7 log.go:172] (0xc002c0a630) (0xc00191c280) Create stream I0326 23:43:49.613430 7 log.go:172] (0xc002c0a630) (0xc00191c280) Stream added, broadcasting: 1 I0326 23:43:49.616785 7 log.go:172] (0xc002c0a630) Reply frame received for 1 I0326 23:43:49.616833 7 log.go:172] (0xc002c0a630) 
(0xc000fd9c20) Create stream I0326 23:43:49.616849 7 log.go:172] (0xc002c0a630) (0xc000fd9c20) Stream added, broadcasting: 3 I0326 23:43:49.618298 7 log.go:172] (0xc002c0a630) Reply frame received for 3 I0326 23:43:49.618346 7 log.go:172] (0xc002c0a630) (0xc0013520a0) Create stream I0326 23:43:49.618375 7 log.go:172] (0xc002c0a630) (0xc0013520a0) Stream added, broadcasting: 5 I0326 23:43:49.619422 7 log.go:172] (0xc002c0a630) Reply frame received for 5 I0326 23:43:49.681885 7 log.go:172] (0xc002c0a630) Data frame received for 5 I0326 23:43:49.681930 7 log.go:172] (0xc0013520a0) (5) Data frame handling I0326 23:43:49.681963 7 log.go:172] (0xc002c0a630) Data frame received for 3 I0326 23:43:49.681976 7 log.go:172] (0xc000fd9c20) (3) Data frame handling I0326 23:43:49.681999 7 log.go:172] (0xc000fd9c20) (3) Data frame sent I0326 23:43:49.682013 7 log.go:172] (0xc002c0a630) Data frame received for 3 I0326 23:43:49.682024 7 log.go:172] (0xc000fd9c20) (3) Data frame handling I0326 23:43:49.683459 7 log.go:172] (0xc002c0a630) Data frame received for 1 I0326 23:43:49.683489 7 log.go:172] (0xc00191c280) (1) Data frame handling I0326 23:43:49.683526 7 log.go:172] (0xc00191c280) (1) Data frame sent I0326 23:43:49.683553 7 log.go:172] (0xc002c0a630) (0xc00191c280) Stream removed, broadcasting: 1 I0326 23:43:49.683589 7 log.go:172] (0xc002c0a630) Go away received I0326 23:43:49.683978 7 log.go:172] (0xc002c0a630) (0xc00191c280) Stream removed, broadcasting: 1 I0326 23:43:49.684000 7 log.go:172] (0xc002c0a630) (0xc000fd9c20) Stream removed, broadcasting: 3 I0326 23:43:49.684017 7 log.go:172] (0xc002c0a630) (0xc0013520a0) Stream removed, broadcasting: 5 Mar 26 23:43:49.684: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:43:49.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5435" for this suite. 
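Aside: the exec transcript above reads /usr/share/volumeshare/shareddata.txt from the busybox main container after a second container wrote it into an emptyDir mounted by both. A rough stand-alone sketch of that pod shape (all names and paths below are illustrative, not the suite's):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo        # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo Hello > /pod-data/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
EOF
# then read the file from the second container, as the test does via ExecWithOptions
kubectl exec shared-volume-demo -c reader -- cat /pod-data/shareddata.txt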
• [SLOW TEST:6.218 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":17,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:43:49.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-2da34588-c07e-4354-9a8f-68a1127c256f STEP: Creating a pod to test consume secrets Mar 26 23:43:49.771: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368" in namespace "projected-854" to be "Succeeded or Failed" Mar 26 23:43:49.774: INFO: Pod "pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368": Phase="Pending", Reason="", readiness=false. Elapsed: 3.514669ms Mar 26 23:43:51.779: INFO: Pod "pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008531352s Mar 26 23:43:53.783: INFO: Pod "pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012087448s STEP: Saw pod success Mar 26 23:43:53.783: INFO: Pod "pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368" satisfied condition "Succeeded or Failed" Mar 26 23:43:53.785: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368 container projected-secret-volume-test: STEP: delete the pod Mar 26 23:43:53.818: INFO: Waiting for pod pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368 to disappear Mar 26 23:43:53.892: INFO: Pod pod-projected-secrets-479bde6e-5e69-4297-ab12-8e557ad24368 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:43:53.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-854" for this suite. 
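Aside: the projected-secret spec above mounts a secret through a projected volume with an explicit defaultMode, then has the consuming container print the file so content and mode can be asserted from its logs. A hand-rolled approximation (all names illustrative):
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo     # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0400           # the mode under test
      sources:
      - secret:
          name: demo-secret
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "ls -l /projected && cat /projected/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /projected
EOF
kubectl logs projected-secret-demo  # once the pod reaches Succeeded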
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":299,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:43:53.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 26 23:43:58.484: INFO: Successfully updated pod "pod-update-934a8a23-1b07-4184-a5a1-976b7726958a" STEP: verifying the updated pod is in kubernetes Mar 26 23:43:58.509: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:43:58.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9879" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":308,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:43:58.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-7ef36ba1-3103-4d53-b35b-2fb521f80d2c STEP: Creating a pod to test consume secrets Mar 26 23:43:58.628: INFO: Waiting up to 5m0s for pod "pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6" in namespace "secrets-3524" to be "Succeeded or Failed" Mar 26 23:43:58.633: INFO: Pod "pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25349ms Mar 26 23:44:00.638: INFO: Pod "pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009373157s Mar 26 23:44:02.641: INFO: Pod "pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0130674s STEP: Saw pod success Mar 26 23:44:02.641: INFO: Pod "pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6" satisfied condition "Succeeded or Failed" Mar 26 23:44:02.644: INFO: Trying to get logs from node latest-worker pod pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6 container secret-volume-test: STEP: delete the pod Mar 26 23:44:02.687: INFO: Waiting for pod pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6 to disappear Mar 26 23:44:02.692: INFO: Pod pod-secrets-d60f28ad-84fb-4b63-a6fd-51bc9b5bd1b6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:02.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3524" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":315,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:02.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:44:02.789: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:09.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1417" for this suite. 
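Aside: the CRD spec above creates CustomResourceDefinitions through the apiextensions client and checks that a list call returns them. The kubectl equivalent of that list operation:
# list registered CRDs (the short name crd also works)
kubectl get customresourcedefinitions
# or retrieve the raw CustomResourceDefinitionList object, as the client does
kubectl get crds -o json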
• [SLOW TEST:6.395 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":21,"skipped":317,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:09.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 26 23:44:09.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9566' Mar 26 23:44:09.240: INFO: stderr: "" Mar 26 23:44:09.240: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 26 23:44:14.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9566 -o json' Mar 26 23:44:14.386: INFO: stderr: "" Mar 26 23:44:14.386: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-26T23:44:09Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9566\",\n \"resourceVersion\": \"3064961\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9566/pods/e2e-test-httpd-pod\",\n \"uid\": \"71819767-294c-48b7-b489-b6e9eb341a8a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hjjmq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n 
\"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hjjmq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hjjmq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T23:44:09Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T23:44:11Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T23:44:11Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-26T23:44:09Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://99b7d99badf5c0f7a8ca06ef117a168487cedca7970327b44ec73f4f1f9f8d6a\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-26T23:44:11Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.165\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.165\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-26T23:44:09Z\"\n }\n}\n" STEP: replace the image in the pod Mar 26 23:44:14.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9566' Mar 26 23:44:14.693: INFO: stderr: "" Mar 26 23:44:14.693: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 26 23:44:14.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9566' Mar 26 23:44:23.013: INFO: stderr: "" Mar 26 23:44:23.013: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:23.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9566" for this suite. 
• [SLOW TEST:13.937 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":22,"skipped":325,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:23.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Mar 26 23:44:23.091: INFO: Waiting up to 5m0s for pod "var-expansion-4dad3772-01d2-4a2e-b036-20878850380e" in namespace "var-expansion-6445" to be "Succeeded or Failed" Mar 26 23:44:23.095: INFO: Pod "var-expansion-4dad3772-01d2-4a2e-b036-20878850380e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.499161ms Mar 26 23:44:25.102: INFO: Pod "var-expansion-4dad3772-01d2-4a2e-b036-20878850380e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011277274s Mar 26 23:44:27.107: INFO: Pod "var-expansion-4dad3772-01d2-4a2e-b036-20878850380e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0155977s STEP: Saw pod success Mar 26 23:44:27.107: INFO: Pod "var-expansion-4dad3772-01d2-4a2e-b036-20878850380e" satisfied condition "Succeeded or Failed" Mar 26 23:44:27.110: INFO: Trying to get logs from node latest-worker2 pod var-expansion-4dad3772-01d2-4a2e-b036-20878850380e container dapi-container: STEP: delete the pod Mar 26 23:44:27.140: INFO: Waiting for pod var-expansion-4dad3772-01d2-4a2e-b036-20878850380e to disappear Mar 26 23:44:27.152: INFO: Pod var-expansion-4dad3772-01d2-4a2e-b036-20878850380e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6445" for this suite. 
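Aside: the var-expansion spec above relies on the kubelet substituting $(VAR) references in a container's args from its env block before the process starts; the shell never sees the unexpanded form. A minimal stand-alone version (names and the echoed value are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]     # $(MESSAGE) is expanded by the kubelet, not the shell
    env:
    - name: MESSAGE
      value: "test-value"
EOF
kubectl logs var-expansion-demo   # prints test-value once the pod succeeds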
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":331,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:27.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-97f78bda-13c5-4a97-9f84-df47876e2f4a STEP: Creating a pod to test consume configMaps Mar 26 23:44:27.239: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9" in namespace "projected-5467" to be "Succeeded or Failed" Mar 26 23:44:27.242: INFO: Pod "pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.588818ms Mar 26 23:44:29.247: INFO: Pod "pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008154209s Mar 26 23:44:31.251: INFO: Pod "pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012262963s STEP: Saw pod success Mar 26 23:44:31.251: INFO: Pod "pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9" satisfied condition "Succeeded or Failed" Mar 26 23:44:31.255: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9 container projected-configmap-volume-test: STEP: delete the pod Mar 26 23:44:31.332: INFO: Waiting for pod pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9 to disappear Mar 26 23:44:31.338: INFO: Pod pod-projected-configmaps-64623fae-bd1c-4589-babe-7a4a7ecdd1e9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:31.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5467" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:31.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-72e520ef-4f41-4d38-a8b0-127de40ea0a7 STEP: Creating a pod to test consume configMaps Mar 26 23:44:31.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b" in namespace "configmap-2798" to be "Succeeded or Failed" Mar 26 23:44:31.455: INFO: Pod "pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.194012ms Mar 26 23:44:33.459: INFO: Pod "pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031259059s Mar 26 23:44:35.463: INFO: Pod "pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035227036s STEP: Saw pod success Mar 26 23:44:35.463: INFO: Pod "pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b" satisfied condition "Succeeded or Failed" Mar 26 23:44:35.466: INFO: Trying to get logs from node latest-worker pod pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b container configmap-volume-test: STEP: delete the pod Mar 26 23:44:35.496: INFO: Waiting for pod pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b to disappear Mar 26 23:44:35.506: INFO: Pod pod-configmaps-306f0268-88a7-4ae6-bb89-a8b4f2ab6c6b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:35.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2798" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":431,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:35.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 26 23:44:39.641: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:39.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3379" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":443,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:39.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-1342 STEP: creating replication controller nodeport-test in namespace services-1342 I0326 23:44:39.838382 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1342, replica count: 2 I0326 23:44:42.888829 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 23:44:45.889082 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 26 23:44:45.889: INFO: Creating new exec pod Mar 26 23:44:50.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodgqjdg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 26 23:44:51.136: INFO: stderr: "I0326 23:44:51.051182 296 log.go:172] (0xc0005b0a50) (0xc00057a140) Create stream\nI0326 23:44:51.051234 296 log.go:172] (0xc0005b0a50) (0xc00057a140) Stream added, broadcasting: 1\nI0326 23:44:51.055387 296 log.go:172] (0xc0005b0a50) Reply frame received for 1\nI0326 23:44:51.055436 296 log.go:172] (0xc0005b0a50) (0xc00079f2c0) Create stream\nI0326 23:44:51.055450 296 log.go:172] (0xc0005b0a50) (0xc00079f2c0) Stream added, broadcasting: 3\nI0326 23:44:51.056824 296 log.go:172] (0xc0005b0a50) Reply frame received for 3\nI0326 23:44:51.056903 296 log.go:172] (0xc0005b0a50) (0xc000470000) Create stream\nI0326 23:44:51.056936 296 log.go:172] (0xc0005b0a50) (0xc000470000) Stream added, broadcasting: 5\nI0326 23:44:51.058362 296 log.go:172] (0xc0005b0a50) Reply frame received for 5\nI0326 23:44:51.130517 296 log.go:172] (0xc0005b0a50) Data frame received for 5\nI0326 23:44:51.130559 296 log.go:172] (0xc000470000) (5) Data frame handling\nI0326 23:44:51.130577 296 log.go:172] (0xc000470000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0326 23:44:51.130909 296 log.go:172] (0xc0005b0a50) Data frame received for 5\nI0326 23:44:51.130949 296 log.go:172] (0xc000470000) (5) Data frame handling\nI0326 23:44:51.130973 296 log.go:172] (0xc000470000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0326 23:44:51.131277 296 log.go:172] (0xc0005b0a50) Data frame received 
for 3\nI0326 23:44:51.131289 296 log.go:172] (0xc00079f2c0) (3) Data frame handling\nI0326 23:44:51.131410 296 log.go:172] (0xc0005b0a50) Data frame received for 5\nI0326 23:44:51.131433 296 log.go:172] (0xc000470000) (5) Data frame handling\nI0326 23:44:51.133010 296 log.go:172] (0xc0005b0a50) Data frame received for 1\nI0326 23:44:51.133028 296 log.go:172] (0xc00057a140) (1) Data frame handling\nI0326 23:44:51.133050 296 log.go:172] (0xc00057a140) (1) Data frame sent\nI0326 23:44:51.133065 296 log.go:172] (0xc0005b0a50) (0xc00057a140) Stream removed, broadcasting: 1\nI0326 23:44:51.133406 296 log.go:172] (0xc0005b0a50) (0xc00057a140) Stream removed, broadcasting: 1\nI0326 23:44:51.133419 296 log.go:172] (0xc0005b0a50) (0xc00079f2c0) Stream removed, broadcasting: 3\nI0326 23:44:51.133569 296 log.go:172] (0xc0005b0a50) (0xc000470000) Stream removed, broadcasting: 5\n" Mar 26 23:44:51.137: INFO: stdout: "" Mar 26 23:44:51.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodgqjdg -- /bin/sh -x -c nc -zv -t -w 2 10.96.89.149 80' Mar 26 23:44:51.335: INFO: stderr: "I0326 23:44:51.262672 318 log.go:172] (0xc0005e6790) (0xc000641400) Create stream\nI0326 23:44:51.262729 318 log.go:172] (0xc0005e6790) (0xc000641400) Stream added, broadcasting: 1\nI0326 23:44:51.265096 318 log.go:172] (0xc0005e6790) Reply frame received for 1\nI0326 23:44:51.265138 318 log.go:172] (0xc0005e6790) (0xc00056d540) Create stream\nI0326 23:44:51.265152 318 log.go:172] (0xc0005e6790) (0xc00056d540) Stream added, broadcasting: 3\nI0326 23:44:51.265975 318 log.go:172] (0xc0005e6790) Reply frame received for 3\nI0326 23:44:51.266001 318 log.go:172] (0xc0005e6790) (0xc0006414a0) Create stream\nI0326 23:44:51.266014 318 log.go:172] (0xc0005e6790) (0xc0006414a0) Stream added, broadcasting: 5\nI0326 23:44:51.266914 318 log.go:172] (0xc0005e6790) Reply frame received for 5\nI0326 23:44:51.328576 318 log.go:172] (0xc0005e6790) Data frame received for 5\nI0326 23:44:51.328626 318 log.go:172] (0xc0006414a0) (5) Data frame handling\nI0326 23:44:51.328648 318 log.go:172] (0xc0006414a0) (5) Data frame sent\nI0326 23:44:51.328661 318 log.go:172] (0xc0005e6790) Data frame received for 5\nI0326 23:44:51.328676 318 log.go:172] (0xc0006414a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.89.149 80\nConnection to 10.96.89.149 80 port [tcp/http] succeeded!\nI0326 23:44:51.328758 318 log.go:172] (0xc0005e6790) Data frame received for 3\nI0326 23:44:51.328781 318 log.go:172] (0xc00056d540) (3) Data frame handling\nI0326 23:44:51.330702 318 log.go:172] (0xc0005e6790) Data frame received for 1\nI0326 23:44:51.330723 318 log.go:172] (0xc000641400) (1) Data frame handling\nI0326 23:44:51.330747 318 log.go:172] (0xc000641400) (1) Data frame sent\nI0326 23:44:51.330763 318 log.go:172] (0xc0005e6790) (0xc000641400) Stream removed, broadcasting: 1\nI0326 23:44:51.330779 318 log.go:172] (0xc0005e6790) Go away received\nI0326 23:44:51.331250 318 log.go:172] (0xc0005e6790) (0xc000641400) Stream removed, broadcasting: 1\nI0326 23:44:51.331276 318 log.go:172] (0xc0005e6790) (0xc00056d540) Stream removed, broadcasting: 3\nI0326 23:44:51.331288 318 log.go:172] (0xc0005e6790) (0xc0006414a0) Stream removed, broadcasting: 5\n" Mar 26 23:44:51.336: INFO: stdout: "" Mar 26 23:44:51.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodgqjdg -- /bin/sh -x -c nc -zv -t -w 2 
172.17.0.13 31993' Mar 26 23:44:51.526: INFO: stderr: "I0326 23:44:51.464246 341 log.go:172] (0xc00056a0b0) (0xc000434be0) Create stream\nI0326 23:44:51.464307 341 log.go:172] (0xc00056a0b0) (0xc000434be0) Stream added, broadcasting: 1\nI0326 23:44:51.467343 341 log.go:172] (0xc00056a0b0) Reply frame received for 1\nI0326 23:44:51.467374 341 log.go:172] (0xc00056a0b0) (0xc000950000) Create stream\nI0326 23:44:51.467382 341 log.go:172] (0xc00056a0b0) (0xc000950000) Stream added, broadcasting: 3\nI0326 23:44:51.468289 341 log.go:172] (0xc00056a0b0) Reply frame received for 3\nI0326 23:44:51.468330 341 log.go:172] (0xc00056a0b0) (0xc00068f5e0) Create stream\nI0326 23:44:51.468341 341 log.go:172] (0xc00056a0b0) (0xc00068f5e0) Stream added, broadcasting: 5\nI0326 23:44:51.469263 341 log.go:172] (0xc00056a0b0) Reply frame received for 5\nI0326 23:44:51.519032 341 log.go:172] (0xc00056a0b0) Data frame received for 5\nI0326 23:44:51.519080 341 log.go:172] (0xc00068f5e0) (5) Data frame handling\nI0326 23:44:51.519121 341 log.go:172] (0xc00068f5e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31993\nConnection to 172.17.0.13 31993 port [tcp/31993] succeeded!\nI0326 23:44:51.519356 341 log.go:172] (0xc00056a0b0) Data frame received for 3\nI0326 23:44:51.519403 341 log.go:172] (0xc000950000) (3) Data frame handling\nI0326 23:44:51.519437 341 log.go:172] (0xc00056a0b0) Data frame received for 5\nI0326 23:44:51.519457 341 log.go:172] (0xc00068f5e0) (5) Data frame handling\nI0326 23:44:51.520803 341 log.go:172] (0xc00056a0b0) Data frame received for 1\nI0326 23:44:51.520836 341 log.go:172] (0xc000434be0) (1) Data frame handling\nI0326 23:44:51.520874 341 log.go:172] (0xc000434be0) (1) Data frame sent\nI0326 23:44:51.520906 341 log.go:172] (0xc00056a0b0) (0xc000434be0) Stream removed, broadcasting: 1\nI0326 23:44:51.521350 341 log.go:172] (0xc00056a0b0) Go away received\nI0326 23:44:51.521564 341 log.go:172] (0xc00056a0b0) (0xc000434be0) Stream removed, broadcasting: 1\nI0326 23:44:51.521600 341 log.go:172] (0xc00056a0b0) (0xc000950000) Stream removed, broadcasting: 3\nI0326 23:44:51.521620 341 log.go:172] (0xc00056a0b0) (0xc00068f5e0) Stream removed, broadcasting: 5\n" Mar 26 23:44:51.526: INFO: stdout: "" Mar 26 23:44:51.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1342 execpodgqjdg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31993' Mar 26 23:44:51.728: INFO: stderr: "I0326 23:44:51.664737 363 log.go:172] (0xc00054e840) (0xc00098a0a0) Create stream\nI0326 23:44:51.664800 363 log.go:172] (0xc00054e840) (0xc00098a0a0) Stream added, broadcasting: 1\nI0326 23:44:51.668228 363 log.go:172] (0xc00054e840) Reply frame received for 1\nI0326 23:44:51.668293 363 log.go:172] (0xc00054e840) (0xc0006af400) Create stream\nI0326 23:44:51.668320 363 log.go:172] (0xc00054e840) (0xc0006af400) Stream added, broadcasting: 3\nI0326 23:44:51.669617 363 log.go:172] (0xc00054e840) Reply frame received for 3\nI0326 23:44:51.669637 363 log.go:172] (0xc00054e840) (0xc0006af680) Create stream\nI0326 23:44:51.669643 363 log.go:172] (0xc00054e840) (0xc0006af680) Stream added, broadcasting: 5\nI0326 23:44:51.670787 363 log.go:172] (0xc00054e840) Reply frame received for 5\nI0326 23:44:51.721783 363 log.go:172] (0xc00054e840) Data frame received for 5\nI0326 23:44:51.721806 363 log.go:172] (0xc0006af680) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31993\nConnection to 172.17.0.12 31993 port [tcp/31993] succeeded!\nI0326 
23:44:51.721860 363 log.go:172] (0xc00054e840) Data frame received for 3\nI0326 23:44:51.722058 363 log.go:172] (0xc0006af400) (3) Data frame handling\nI0326 23:44:51.722188 363 log.go:172] (0xc0006af680) (5) Data frame sent\nI0326 23:44:51.722212 363 log.go:172] (0xc00054e840) Data frame received for 5\nI0326 23:44:51.722220 363 log.go:172] (0xc0006af680) (5) Data frame handling\nI0326 23:44:51.723494 363 log.go:172] (0xc00054e840) Data frame received for 1\nI0326 23:44:51.723510 363 log.go:172] (0xc00098a0a0) (1) Data frame handling\nI0326 23:44:51.723517 363 log.go:172] (0xc00098a0a0) (1) Data frame sent\nI0326 23:44:51.723631 363 log.go:172] (0xc00054e840) (0xc00098a0a0) Stream removed, broadcasting: 1\nI0326 23:44:51.723708 363 log.go:172] (0xc00054e840) Go away received\nI0326 23:44:51.723962 363 log.go:172] (0xc00054e840) (0xc00098a0a0) Stream removed, broadcasting: 1\nI0326 23:44:51.723975 363 log.go:172] (0xc00054e840) (0xc0006af400) Stream removed, broadcasting: 3\nI0326 23:44:51.723983 363 log.go:172] (0xc00054e840) (0xc0006af680) Stream removed, broadcasting: 5\n" Mar 26 23:44:51.728: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:51.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1342" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.036 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":27,"skipped":445,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:51.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 23:44:51.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153" in namespace "downward-api-7234" to be "Succeeded or Failed" Mar 26 23:44:51.843: INFO: Pod "downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153": Phase="Pending", Reason="", readiness=false. Elapsed: 17.017545ms Mar 26 23:44:53.849: INFO: Pod "downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022850785s Mar 26 23:44:55.858: INFO: Pod "downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031444813s STEP: Saw pod success Mar 26 23:44:55.858: INFO: Pod "downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153" satisfied condition "Succeeded or Failed" Mar 26 23:44:55.860: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153 container client-container: STEP: delete the pod Mar 26 23:44:55.879: INFO: Waiting for pod downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153 to disappear Mar 26 23:44:55.884: INFO: Pod downwardapi-volume-0ba206e3-1234-4e7c-8f43-477d64a15153 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:44:55.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7234" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":458,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:44:55.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Mar 26 23:44:55.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-7793 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 26 23:44:56.042: INFO: stderr: "" Mar 26 23:44:56.042: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Mar 26 23:44:56.042: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 26 23:44:56.042: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7793" to be "running and ready, or succeeded" Mar 26 23:44:56.052: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.95426ms Mar 26 23:44:58.181: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138708752s Mar 26 23:45:00.185: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.14295105s Mar 26 23:45:00.185: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 26 23:45:00.185: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Mar 26 23:45:00.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7793' Mar 26 23:45:00.281: INFO: stderr: "" Mar 26 23:45:00.281: INFO: stdout: "I0326 23:44:58.462461 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/q2k 287\nI0326 23:44:58.662791 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/b46 204\nI0326 23:44:58.862640 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/wl9n 352\nI0326 23:44:59.062668 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/gtx 382\nI0326 23:44:59.262619 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/69q 344\nI0326 23:44:59.462650 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/r26 487\nI0326 23:44:59.662606 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/9r4 362\nI0326 23:44:59.862731 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/9psr 240\nI0326 23:45:00.062649 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/2jz 464\nI0326 23:45:00.262635 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/p8x 242\n" STEP: limiting log lines Mar 26 23:45:00.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7793 --tail=1' Mar 26 23:45:00.386: INFO: stderr: "" Mar 26 23:45:00.386: INFO: stdout: "I0326 23:45:00.262635 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/p8x 242\n" Mar 26 23:45:00.386: INFO: got output "I0326 23:45:00.262635 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/p8x 242\n" STEP: limiting log bytes Mar 26 23:45:00.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7793 --limit-bytes=1' Mar 26 23:45:00.493: INFO: stderr: "" Mar 26 23:45:00.493: INFO: stdout: "I" Mar 26 23:45:00.493: INFO: got output "I" STEP: exposing timestamps Mar 26 23:45:00.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7793 --tail=1 --timestamps' Mar 26 23:45:00.594: INFO: stderr: "" Mar 26 23:45:00.594: INFO: stdout: "2020-03-26T23:45:00.463028021Z I0326 23:45:00.462851 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/w92 456\n" Mar 26 23:45:00.594: INFO: got output "2020-03-26T23:45:00.463028021Z I0326 23:45:00.462851 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/w92 456\n" STEP: restricting to a time range Mar 26 23:45:03.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7793 --since=1s' Mar 26 23:45:03.206: INFO: stderr: "" Mar 26 23:45:03.206: INFO: stdout: "I0326 23:45:02.262635 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/spc 398\nI0326 23:45:02.462660 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/l9t6 553\nI0326 23:45:02.662703 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/5ws 577\nI0326 23:45:02.862653 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/p8l 599\nI0326 23:45:03.062655 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/rbs 215\n" Mar 26 23:45:03.206: INFO: Running
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7793 --since=24h' Mar 26 23:45:03.314: INFO: stderr: "" Mar 26 23:45:03.314: INFO: stdout: "I0326 23:44:58.462461 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/q2k 287\nI0326 23:44:58.662791 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/b46 204\nI0326 23:44:58.862640 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/wl9n 352\nI0326 23:44:59.062668 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/gtx 382\nI0326 23:44:59.262619 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/69q 344\nI0326 23:44:59.462650 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/r26 487\nI0326 23:44:59.662606 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/9r4 362\nI0326 23:44:59.862731 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/9psr 240\nI0326 23:45:00.062649 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/2jz 464\nI0326 23:45:00.262635 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/p8x 242\nI0326 23:45:00.462851 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/w92 456\nI0326 23:45:00.662773 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/cz2 255\nI0326 23:45:00.862620 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/dw6 287\nI0326 23:45:01.062682 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/qb4 540\nI0326 23:45:01.262602 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/ssck 325\nI0326 23:45:01.462664 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/s74 504\nI0326 23:45:01.662663 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/xx5s 590\nI0326 23:45:01.862711 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/xssb 572\nI0326 23:45:02.062675 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/dqb 432\nI0326 23:45:02.262635 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/spc 398\nI0326 23:45:02.462660 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/l9t6 553\nI0326 23:45:02.662703 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/5ws 577\nI0326 23:45:02.862653 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/p8l 599\nI0326 23:45:03.062655 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/rbs 215\nI0326 23:45:03.262681 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/2lz 279\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Mar 26 23:45:03.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7793' Mar 26 23:45:12.743: INFO: stderr: "" Mar 26 23:45:12.743: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:45:12.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7793" for this suite. 
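Aside: the filtering assertions above map one-to-one onto kubectl logs flags. For reference, using the pod and namespace from this run:
kubectl logs logs-generator --namespace=kubectl-7793               # full stream
kubectl logs logs-generator --namespace=kubectl-7793 --tail=1      # last line only
kubectl logs logs-generator --namespace=kubectl-7793 --limit-bytes=1
kubectl logs logs-generator --namespace=kubectl-7793 --tail=1 --timestamps
kubectl logs logs-generator --namespace=kubectl-7793 --since=1s    # recent window only
kubectl logs logs-generator --namespace=kubectl-7793 --since=24h   # effectively everything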
• [SLOW TEST:16.860 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":29,"skipped":459,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:45:12.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:45:12.864: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 26 23:45:12.872: INFO: Number of nodes with available pods: 0 Mar 26 23:45:12.872: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 26 23:45:12.941: INFO: Number of nodes with available pods: 0 Mar 26 23:45:12.941: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:13.946: INFO: Number of nodes with available pods: 0 Mar 26 23:45:13.946: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:14.945: INFO: Number of nodes with available pods: 0 Mar 26 23:45:14.945: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:15.945: INFO: Number of nodes with available pods: 1 Mar 26 23:45:15.945: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 26 23:45:15.995: INFO: Number of nodes with available pods: 1 Mar 26 23:45:15.995: INFO: Number of running nodes: 0, number of available pods: 1 Mar 26 23:45:17.000: INFO: Number of nodes with available pods: 0 Mar 26 23:45:17.000: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 26 23:45:17.017: INFO: Number of nodes with available pods: 0 Mar 26 23:45:17.017: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:18.028: INFO: Number of nodes with available pods: 0 Mar 26 23:45:18.028: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:19.021: INFO: Number of nodes with available pods: 0 Mar 26 23:45:19.021: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:20.021: INFO: Number of nodes with available pods: 0 Mar 26 23:45:20.021: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:21.022: INFO: Number of nodes with available pods: 0 Mar 26 23:45:21.022: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:22.021: INFO: Number of nodes with available pods: 0 Mar 26 23:45:22.021: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:23.021: INFO: Number of nodes with available pods: 0 Mar 26 23:45:23.021: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:24.021: INFO: Number of nodes with available pods: 0 Mar 26 23:45:24.021: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:25.021: INFO: Number of nodes with available pods: 0 Mar 26 23:45:25.021: INFO: Node latest-worker is running more than one daemon pod Mar 26 23:45:26.020: INFO: Number of nodes with available pods: 1 Mar 26 23:45:26.020: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7883, will wait for the garbage collector to delete the pods Mar 26 23:45:26.086: INFO: Deleting DaemonSet.extensions daemon-set took: 6.417126ms Mar 26 23:45:26.386: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.349559ms Mar 26 23:45:32.807: INFO: Number of nodes with available pods: 0 Mar 26 23:45:32.807: INFO: Number of running nodes: 0, number of available pods: 0 Mar 26 23:45:32.813: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7883/daemonsets","resourceVersion":"3065507"},"items":null} Mar 26 23:45:32.815: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7883/pods","resourceVersion":"3065507"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:45:32.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7883" for this suite. • [SLOW TEST:20.158 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":30,"skipped":481,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:45:32.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 26 23:45:32.978: INFO: Created pod &Pod{ObjectMeta:{dns-5798 dns-5798 /api/v1/namespaces/dns-5798/pods/dns-5798 22e12b83-ec72-4ae1-8afb-7e0e2f42e0e1 3065514 0 2020-03-26 23:45:32 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlj7v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlj7v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlj7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 26 23:45:32.982: INFO: The status of Pod dns-5798 is Pending, waiting for it to be Running (with Ready = true) Mar 26 23:45:35.098: INFO: The status of Pod dns-5798 is Pending, waiting for it to be Running (with Ready = true) Mar 26 23:45:36.986: INFO: The status of Pod dns-5798 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 26 23:45:36.986: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5798 PodName:dns-5798 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 23:45:36.986: INFO: >>> kubeConfig: /root/.kube/config I0326 23:45:37.020845 7 log.go:172] (0xc0026c9080) (0xc000b9d5e0) Create stream I0326 23:45:37.020878 7 log.go:172] (0xc0026c9080) (0xc000b9d5e0) Stream added, broadcasting: 1 I0326 23:45:37.022799 7 log.go:172] (0xc0026c9080) Reply frame received for 1 I0326 23:45:37.022844 7 log.go:172] (0xc0026c9080) (0xc00032c1e0) Create stream I0326 23:45:37.022854 7 log.go:172] (0xc0026c9080) (0xc00032c1e0) Stream added, broadcasting: 3 I0326 23:45:37.023739 7 log.go:172] (0xc0026c9080) Reply frame received for 3 I0326 23:45:37.023796 7 log.go:172] (0xc0026c9080) (0xc000365ae0) Create stream I0326 23:45:37.023829 7 log.go:172] (0xc0026c9080) (0xc000365ae0) Stream added, broadcasting: 5 I0326 23:45:37.025264 7 log.go:172] (0xc0026c9080) Reply frame received for 5 I0326 23:45:37.127154 7 log.go:172] (0xc0026c9080) Data frame received for 3 I0326 23:45:37.127184 7 log.go:172] (0xc00032c1e0) (3) Data frame handling I0326 23:45:37.127210 7 log.go:172] (0xc00032c1e0) (3) Data frame sent I0326 23:45:37.127775 7 log.go:172] (0xc0026c9080) Data frame received for 3 I0326 23:45:37.127820 7 log.go:172] (0xc00032c1e0) (3) Data frame handling I0326 23:45:37.127901 7 log.go:172] (0xc0026c9080) Data frame received for 5 I0326 23:45:37.127916 7 log.go:172] (0xc000365ae0) (5) Data frame handling I0326 23:45:37.129847 7 log.go:172] (0xc0026c9080) Data frame received for 1 I0326 23:45:37.129875 7 log.go:172] (0xc000b9d5e0) (1) Data frame handling I0326 23:45:37.129889 7 log.go:172] (0xc000b9d5e0) (1) Data frame sent I0326 23:45:37.129974 7 log.go:172] (0xc0026c9080) (0xc000b9d5e0) Stream removed, broadcasting: 1 I0326 23:45:37.129998 7 log.go:172] (0xc0026c9080) Go away received I0326 23:45:37.130083 7 log.go:172] (0xc0026c9080) (0xc000b9d5e0) Stream removed, broadcasting: 1 I0326 23:45:37.130103 7 log.go:172] (0xc0026c9080) (0xc00032c1e0) Stream removed, broadcasting: 3 I0326 23:45:37.130113 7 log.go:172] (0xc0026c9080) (0xc000365ae0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
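The numbered "Create stream / Data frame" lines above are the SPDY exec transport at work: ExecWithOptions runs a command inside the pod's container and copies output back over separate numbered channels. Roughly, with client-go (a sketch, not the framework's exact code; Stream is the client-go call of this era):

    package example

    import (
        "bytes"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        restclient "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    // execInPod runs cmd in a pod's container via the exec subresource;
    // the SPDY streams it opens carry the error/stdout/stderr channels
    // that appear as the numbered broadcasts in the log.
    func execInPod(cs kubernetes.Interface, cfg *restclient.Config,
        ns, pod, container string, cmd []string) (string, string, error) {
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: container,
                Command:   cmd,
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            return "", "", err
        }
        var stdout, stderr bytes.Buffer
        err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
        return stdout.String(), stderr.String(), err
    }
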
Mar 26 23:45:37.130: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5798 PodName:dns-5798 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 26 23:45:37.130: INFO: >>> kubeConfig: /root/.kube/config I0326 23:45:37.164135 7 log.go:172] (0xc0028b1340) (0xc000520f00) Create stream I0326 23:45:37.164158 7 log.go:172] (0xc0028b1340) (0xc000520f00) Stream added, broadcasting: 1 I0326 23:45:37.166335 7 log.go:172] (0xc0028b1340) Reply frame received for 1 I0326 23:45:37.166363 7 log.go:172] (0xc0028b1340) (0xc0005212c0) Create stream I0326 23:45:37.166375 7 log.go:172] (0xc0028b1340) (0xc0005212c0) Stream added, broadcasting: 3 I0326 23:45:37.167245 7 log.go:172] (0xc0028b1340) Reply frame received for 3 I0326 23:45:37.167276 7 log.go:172] (0xc0028b1340) (0xc000521c20) Create stream I0326 23:45:37.167287 7 log.go:172] (0xc0028b1340) (0xc000521c20) Stream added, broadcasting: 5 I0326 23:45:37.168300 7 log.go:172] (0xc0028b1340) Reply frame received for 5 I0326 23:45:37.256762 7 log.go:172] (0xc0028b1340) Data frame received for 3 I0326 23:45:37.256817 7 log.go:172] (0xc0005212c0) (3) Data frame handling I0326 23:45:37.256857 7 log.go:172] (0xc0005212c0) (3) Data frame sent I0326 23:45:37.257074 7 log.go:172] (0xc0028b1340) Data frame received for 5 I0326 23:45:37.257260 7 log.go:172] (0xc000521c20) (5) Data frame handling I0326 23:45:37.257370 7 log.go:172] (0xc0028b1340) Data frame received for 3 I0326 23:45:37.257405 7 log.go:172] (0xc0005212c0) (3) Data frame handling I0326 23:45:37.259001 7 log.go:172] (0xc0028b1340) Data frame received for 1 I0326 23:45:37.259044 7 log.go:172] (0xc000520f00) (1) Data frame handling I0326 23:45:37.259075 7 log.go:172] (0xc000520f00) (1) Data frame sent I0326 23:45:37.259102 7 log.go:172] (0xc0028b1340) (0xc000520f00) Stream removed, broadcasting: 1 I0326 23:45:37.259134 7 log.go:172] (0xc0028b1340) Go away received I0326 23:45:37.259261 7 log.go:172] (0xc0028b1340) (0xc000520f00) Stream removed, broadcasting: 1 I0326 23:45:37.259284 7 log.go:172] (0xc0028b1340) (0xc0005212c0) Stream removed, broadcasting: 3 I0326 23:45:37.259300 7 log.go:172] (0xc0028b1340) (0xc000521c20) Stream removed, broadcasting: 5 Mar 26 23:45:37.259: INFO: Deleting pod dns-5798... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:45:37.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5798" for this suite. 
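The pod this test created (dumped in full above) comes down to two fields: dnsPolicy None, which tells the kubelet to ignore cluster DNS entirely, and a dnsConfig that becomes the container's /etc/resolv.conf. A minimal sketch with the Kubernetes Go API types (the pod name is illustrative; the nameserver and search domain match the log):

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // dnsConfigPod reproduces the shape of the pod above: with DNSPolicy
    // "None" the kubelet writes resolv.conf purely from DNSConfig, so the
    // container sees nameserver 1.1.1.1 and search domain resolv.conf.local.
    func dnsConfigPod(ns string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test", Namespace: ns},
            Spec: corev1.PodSpec{
                DNSPolicy: corev1.DNSNone,
                DNSConfig: &corev1.PodDNSConfig{
                    Nameservers: []string{"1.1.1.1"},
                    Searches:    []string{"resolv.conf.local"},
                },
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
                    Args:  []string{"pause"},
                }},
            },
        }
    }
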
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":31,"skipped":495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:45:37.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:45:37.350: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6597 I0326 23:45:37.392358 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6597, replica count: 1 I0326 23:45:38.442786 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 23:45:39.442975 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 23:45:40.443195 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 26 23:45:40.556: INFO: Created: latency-svc-kbtq7 Mar 26 23:45:40.584: INFO: Got endpoints: latency-svc-kbtq7 [40.946891ms] Mar 26 23:45:40.610: INFO: Created: latency-svc-4w9kl Mar 26 23:45:40.623: INFO: Got endpoints: latency-svc-4w9kl [38.856292ms] Mar 26 23:45:40.639: INFO: Created: latency-svc-v8hsx Mar 26 23:45:40.652: INFO: Got endpoints: latency-svc-v8hsx [68.28443ms] Mar 26 23:45:40.678: INFO: Created: latency-svc-x2v6x Mar 26 23:45:40.751: INFO: Got endpoints: latency-svc-x2v6x [166.54461ms] Mar 26 23:45:40.754: INFO: Created: latency-svc-94rzz Mar 26 23:45:40.771: INFO: Got endpoints: latency-svc-94rzz [186.70368ms] Mar 26 23:45:40.796: INFO: Created: latency-svc-csmnj Mar 26 23:45:40.815: INFO: Got endpoints: latency-svc-csmnj [230.283916ms] Mar 26 23:45:40.843: INFO: Created: latency-svc-rg5zq Mar 26 23:45:40.894: INFO: Got endpoints: latency-svc-rg5zq [309.799221ms] Mar 26 23:45:40.895: INFO: Created: latency-svc-cdsnp Mar 26 23:45:40.905: INFO: Got endpoints: latency-svc-cdsnp [320.20165ms] Mar 26 23:45:40.923: INFO: Created: latency-svc-9jwvg Mar 26 23:45:40.935: INFO: Got endpoints: latency-svc-9jwvg [350.8601ms] Mar 26 23:45:40.947: INFO: Created: latency-svc-xjdmn Mar 26 23:45:40.959: INFO: Got endpoints: latency-svc-xjdmn [374.143823ms] Mar 26 23:45:40.976: INFO: Created: latency-svc-hzrxj Mar 26 23:45:40.989: INFO: Got endpoints: latency-svc-hzrxj [404.481711ms] Mar 26 23:45:41.026: INFO: Created: latency-svc-992sr Mar 26 23:45:41.044: INFO: Got endpoints: latency-svc-992sr [459.096764ms] Mar 26 23:45:41.092: INFO: Created: latency-svc-wgq27 Mar 26 23:45:41.103: INFO: Got endpoints: latency-svc-wgq27 [518.226962ms] Mar 26 23:45:41.119: INFO: Created: latency-svc-w9887 Mar 26 23:45:41.175: INFO: Got endpoints: latency-svc-w9887 [591.424589ms] Mar 26 23:45:41.178: INFO: Created: 
latency-svc-spqr9 Mar 26 23:45:41.189: INFO: Got endpoints: latency-svc-spqr9 [604.998027ms] Mar 26 23:45:41.235: INFO: Created: latency-svc-t8ks7 Mar 26 23:45:41.252: INFO: Got endpoints: latency-svc-t8ks7 [667.386328ms] Mar 26 23:45:41.272: INFO: Created: latency-svc-c2gsx Mar 26 23:45:41.319: INFO: Got endpoints: latency-svc-c2gsx [696.492533ms] Mar 26 23:45:41.321: INFO: Created: latency-svc-8rrqc Mar 26 23:45:41.329: INFO: Got endpoints: latency-svc-8rrqc [676.923066ms] Mar 26 23:45:41.347: INFO: Created: latency-svc-cn257 Mar 26 23:45:41.359: INFO: Got endpoints: latency-svc-cn257 [608.418549ms] Mar 26 23:45:41.378: INFO: Created: latency-svc-825dz Mar 26 23:45:41.390: INFO: Got endpoints: latency-svc-825dz [618.760993ms] Mar 26 23:45:41.409: INFO: Created: latency-svc-vwcsn Mar 26 23:45:41.469: INFO: Got endpoints: latency-svc-vwcsn [654.26511ms] Mar 26 23:45:41.471: INFO: Created: latency-svc-26n6x Mar 26 23:45:41.494: INFO: Got endpoints: latency-svc-26n6x [599.604103ms] Mar 26 23:45:41.494: INFO: Created: latency-svc-gchtp Mar 26 23:45:41.515: INFO: Got endpoints: latency-svc-gchtp [610.622872ms] Mar 26 23:45:41.539: INFO: Created: latency-svc-9s89l Mar 26 23:45:41.552: INFO: Got endpoints: latency-svc-9s89l [616.698463ms] Mar 26 23:45:41.600: INFO: Created: latency-svc-8csdw Mar 26 23:45:41.619: INFO: Created: latency-svc-xgm4f Mar 26 23:45:41.619: INFO: Got endpoints: latency-svc-8csdw [660.527135ms] Mar 26 23:45:41.636: INFO: Got endpoints: latency-svc-xgm4f [647.073999ms] Mar 26 23:45:41.650: INFO: Created: latency-svc-x2gn9 Mar 26 23:45:41.660: INFO: Got endpoints: latency-svc-x2gn9 [616.436105ms] Mar 26 23:45:41.673: INFO: Created: latency-svc-s84vm Mar 26 23:45:41.684: INFO: Got endpoints: latency-svc-s84vm [581.085499ms] Mar 26 23:45:41.733: INFO: Created: latency-svc-fwlcn Mar 26 23:45:41.750: INFO: Got endpoints: latency-svc-fwlcn [574.083036ms] Mar 26 23:45:41.750: INFO: Created: latency-svc-zjv85 Mar 26 23:45:41.773: INFO: Got endpoints: latency-svc-zjv85 [584.099024ms] Mar 26 23:45:41.803: INFO: Created: latency-svc-fm55s Mar 26 23:45:41.814: INFO: Got endpoints: latency-svc-fm55s [562.765794ms] Mar 26 23:45:41.859: INFO: Created: latency-svc-cc7sm Mar 26 23:45:41.881: INFO: Got endpoints: latency-svc-cc7sm [561.637638ms] Mar 26 23:45:41.925: INFO: Created: latency-svc-dtj2l Mar 26 23:45:41.996: INFO: Got endpoints: latency-svc-dtj2l [666.374487ms] Mar 26 23:45:42.001: INFO: Created: latency-svc-qsznj Mar 26 23:45:42.012: INFO: Got endpoints: latency-svc-qsznj [653.272504ms] Mar 26 23:45:42.053: INFO: Created: latency-svc-gjx48 Mar 26 23:45:42.069: INFO: Got endpoints: latency-svc-gjx48 [679.249696ms] Mar 26 23:45:42.070: INFO: Created: latency-svc-qn9vm Mar 26 23:45:42.085: INFO: Got endpoints: latency-svc-qn9vm [616.219847ms] Mar 26 23:45:42.152: INFO: Created: latency-svc-hvtqg Mar 26 23:45:42.169: INFO: Created: latency-svc-t8gt6 Mar 26 23:45:42.170: INFO: Got endpoints: latency-svc-hvtqg [675.996402ms] Mar 26 23:45:42.181: INFO: Got endpoints: latency-svc-t8gt6 [665.87857ms] Mar 26 23:45:42.200: INFO: Created: latency-svc-d7vfz Mar 26 23:45:42.211: INFO: Got endpoints: latency-svc-d7vfz [659.061716ms] Mar 26 23:45:42.237: INFO: Created: latency-svc-bdqps Mar 26 23:45:42.265: INFO: Got endpoints: latency-svc-bdqps [645.536455ms] Mar 26 23:45:42.279: INFO: Created: latency-svc-cqjjz Mar 26 23:45:42.310: INFO: Got endpoints: latency-svc-cqjjz [673.803984ms] Mar 26 23:45:42.337: INFO: Created: latency-svc-np68w Mar 26 23:45:42.355: INFO: Got endpoints: 
latency-svc-np68w [695.089729ms] Mar 26 23:45:42.397: INFO: Created: latency-svc-hkjpp Mar 26 23:45:42.414: INFO: Got endpoints: latency-svc-hkjpp [730.095805ms] Mar 26 23:45:42.427: INFO: Created: latency-svc-6rssg Mar 26 23:45:42.438: INFO: Got endpoints: latency-svc-6rssg [688.240243ms] Mar 26 23:45:42.459: INFO: Created: latency-svc-tmz72 Mar 26 23:45:42.489: INFO: Got endpoints: latency-svc-tmz72 [715.863672ms] Mar 26 23:45:42.534: INFO: Created: latency-svc-7lz2g Mar 26 23:45:42.539: INFO: Got endpoints: latency-svc-7lz2g [725.073515ms] Mar 26 23:45:42.565: INFO: Created: latency-svc-22nw2 Mar 26 23:45:42.582: INFO: Got endpoints: latency-svc-22nw2 [700.699786ms] Mar 26 23:45:42.595: INFO: Created: latency-svc-ggd8l Mar 26 23:45:42.606: INFO: Got endpoints: latency-svc-ggd8l [610.335925ms] Mar 26 23:45:42.619: INFO: Created: latency-svc-zdbj7 Mar 26 23:45:42.691: INFO: Got endpoints: latency-svc-zdbj7 [678.137311ms] Mar 26 23:45:42.712: INFO: Created: latency-svc-h4nm2 Mar 26 23:45:42.720: INFO: Got endpoints: latency-svc-h4nm2 [650.956945ms] Mar 26 23:45:42.745: INFO: Created: latency-svc-cbl7c Mar 26 23:45:42.763: INFO: Got endpoints: latency-svc-cbl7c [677.191093ms] Mar 26 23:45:42.781: INFO: Created: latency-svc-nw94v Mar 26 23:45:42.834: INFO: Got endpoints: latency-svc-nw94v [664.270674ms] Mar 26 23:45:42.836: INFO: Created: latency-svc-5b7ls Mar 26 23:45:42.840: INFO: Got endpoints: latency-svc-5b7ls [658.495988ms] Mar 26 23:45:42.861: INFO: Created: latency-svc-d56m7 Mar 26 23:45:42.870: INFO: Got endpoints: latency-svc-d56m7 [659.490016ms] Mar 26 23:45:42.904: INFO: Created: latency-svc-wkb79 Mar 26 23:45:42.912: INFO: Got endpoints: latency-svc-wkb79 [647.366859ms] Mar 26 23:45:42.933: INFO: Created: latency-svc-clcvt Mar 26 23:45:42.990: INFO: Got endpoints: latency-svc-clcvt [680.451298ms] Mar 26 23:45:42.991: INFO: Created: latency-svc-ngpjc Mar 26 23:45:43.001: INFO: Got endpoints: latency-svc-ngpjc [645.968518ms] Mar 26 23:45:43.030: INFO: Created: latency-svc-g4p6w Mar 26 23:45:43.049: INFO: Got endpoints: latency-svc-g4p6w [635.252563ms] Mar 26 23:45:43.141: INFO: Created: latency-svc-q9cfr Mar 26 23:45:43.171: INFO: Got endpoints: latency-svc-q9cfr [733.500991ms] Mar 26 23:45:43.172: INFO: Created: latency-svc-j28vx Mar 26 23:45:43.196: INFO: Got endpoints: latency-svc-j28vx [706.174387ms] Mar 26 23:45:43.226: INFO: Created: latency-svc-4p98c Mar 26 23:45:43.234: INFO: Got endpoints: latency-svc-4p98c [694.884153ms] Mar 26 23:45:43.296: INFO: Created: latency-svc-5rcbc Mar 26 23:45:43.318: INFO: Got endpoints: latency-svc-5rcbc [735.586408ms] Mar 26 23:45:43.341: INFO: Created: latency-svc-hm5pv Mar 26 23:45:43.354: INFO: Got endpoints: latency-svc-hm5pv [748.176895ms] Mar 26 23:45:43.381: INFO: Created: latency-svc-th7ld Mar 26 23:45:43.415: INFO: Got endpoints: latency-svc-th7ld [724.185408ms] Mar 26 23:45:43.436: INFO: Created: latency-svc-s4rp6 Mar 26 23:45:43.451: INFO: Got endpoints: latency-svc-s4rp6 [730.884541ms] Mar 26 23:45:43.540: INFO: Created: latency-svc-d7mkz Mar 26 23:45:43.548: INFO: Got endpoints: latency-svc-d7mkz [785.399811ms] Mar 26 23:45:43.569: INFO: Created: latency-svc-wntxv Mar 26 23:45:43.583: INFO: Got endpoints: latency-svc-wntxv [748.736533ms] Mar 26 23:45:43.605: INFO: Created: latency-svc-k4z88 Mar 26 23:45:43.622: INFO: Got endpoints: latency-svc-k4z88 [782.114447ms] Mar 26 23:45:43.636: INFO: Created: latency-svc-fdk5g Mar 26 23:45:43.678: INFO: Got endpoints: latency-svc-fdk5g [807.715958ms] Mar 26 23:45:43.706: INFO: Created: 
latency-svc-wdfl2 Mar 26 23:45:43.721: INFO: Got endpoints: latency-svc-wdfl2 [808.493025ms] Mar 26 23:45:43.743: INFO: Created: latency-svc-sh4lw Mar 26 23:45:43.755: INFO: Got endpoints: latency-svc-sh4lw [765.349607ms] Mar 26 23:45:43.773: INFO: Created: latency-svc-95l87 Mar 26 23:45:43.816: INFO: Got endpoints: latency-svc-95l87 [815.165767ms] Mar 26 23:45:43.843: INFO: Created: latency-svc-jj68l Mar 26 23:45:43.858: INFO: Got endpoints: latency-svc-jj68l [808.260149ms] Mar 26 23:45:43.886: INFO: Created: latency-svc-6r4vm Mar 26 23:45:43.936: INFO: Got endpoints: latency-svc-6r4vm [764.226178ms] Mar 26 23:45:43.965: INFO: Created: latency-svc-fbfkp Mar 26 23:45:43.989: INFO: Got endpoints: latency-svc-fbfkp [793.635475ms] Mar 26 23:45:44.013: INFO: Created: latency-svc-v2jtj Mar 26 23:45:44.026: INFO: Got endpoints: latency-svc-v2jtj [791.243389ms] Mar 26 23:45:44.062: INFO: Created: latency-svc-tbswf Mar 26 23:45:44.067: INFO: Got endpoints: latency-svc-tbswf [749.705443ms] Mar 26 23:45:44.089: INFO: Created: latency-svc-9nghj Mar 26 23:45:44.104: INFO: Got endpoints: latency-svc-9nghj [750.160573ms] Mar 26 23:45:44.126: INFO: Created: latency-svc-p4bjl Mar 26 23:45:44.155: INFO: Got endpoints: latency-svc-p4bjl [739.99572ms] Mar 26 23:45:44.193: INFO: Created: latency-svc-rr568 Mar 26 23:45:44.206: INFO: Got endpoints: latency-svc-rr568 [755.259052ms] Mar 26 23:45:44.223: INFO: Created: latency-svc-w2q9z Mar 26 23:45:44.236: INFO: Got endpoints: latency-svc-w2q9z [687.890946ms] Mar 26 23:45:44.254: INFO: Created: latency-svc-mhxbr Mar 26 23:45:44.267: INFO: Got endpoints: latency-svc-mhxbr [683.718468ms] Mar 26 23:45:44.281: INFO: Created: latency-svc-nq5t6 Mar 26 23:45:44.325: INFO: Got endpoints: latency-svc-nq5t6 [703.258706ms] Mar 26 23:45:44.342: INFO: Created: latency-svc-dqttj Mar 26 23:45:44.356: INFO: Got endpoints: latency-svc-dqttj [677.810611ms] Mar 26 23:45:44.391: INFO: Created: latency-svc-l4ztw Mar 26 23:45:44.409: INFO: Got endpoints: latency-svc-l4ztw [687.97031ms] Mar 26 23:45:44.463: INFO: Created: latency-svc-ft8km Mar 26 23:45:44.481: INFO: Created: latency-svc-4m6c8 Mar 26 23:45:44.481: INFO: Got endpoints: latency-svc-ft8km [725.906334ms] Mar 26 23:45:44.509: INFO: Got endpoints: latency-svc-4m6c8 [692.595248ms] Mar 26 23:45:44.540: INFO: Created: latency-svc-565jx Mar 26 23:45:44.553: INFO: Got endpoints: latency-svc-565jx [695.170192ms] Mar 26 23:45:44.599: INFO: Created: latency-svc-l6h2n Mar 26 23:45:44.612: INFO: Got endpoints: latency-svc-l6h2n [676.768819ms] Mar 26 23:45:44.631: INFO: Created: latency-svc-dk4q7 Mar 26 23:45:44.648: INFO: Got endpoints: latency-svc-dk4q7 [659.169356ms] Mar 26 23:45:44.667: INFO: Created: latency-svc-8nbsv Mar 26 23:45:44.726: INFO: Got endpoints: latency-svc-8nbsv [700.62637ms] Mar 26 23:45:44.756: INFO: Created: latency-svc-fzhqf Mar 26 23:45:44.769: INFO: Got endpoints: latency-svc-fzhqf [702.157654ms] Mar 26 23:45:44.797: INFO: Created: latency-svc-hsrvq Mar 26 23:45:44.811: INFO: Got endpoints: latency-svc-hsrvq [706.435165ms] Mar 26 23:45:44.864: INFO: Created: latency-svc-zx8fs Mar 26 23:45:44.883: INFO: Created: latency-svc-s8q5k Mar 26 23:45:44.883: INFO: Got endpoints: latency-svc-zx8fs [727.941743ms] Mar 26 23:45:44.919: INFO: Got endpoints: latency-svc-s8q5k [712.522569ms] Mar 26 23:45:44.941: INFO: Created: latency-svc-j65fv Mar 26 23:45:44.955: INFO: Got endpoints: latency-svc-j65fv [719.366585ms] Mar 26 23:45:44.990: INFO: Created: latency-svc-zbdhm Mar 26 23:45:44.997: INFO: Got endpoints: 
latency-svc-zbdhm [730.317844ms] Mar 26 23:45:45.019: INFO: Created: latency-svc-88ksq Mar 26 23:45:45.033: INFO: Got endpoints: latency-svc-88ksq [707.402828ms] Mar 26 23:45:45.152: INFO: Created: latency-svc-cqkrg Mar 26 23:45:45.187: INFO: Got endpoints: latency-svc-cqkrg [830.781705ms] Mar 26 23:45:45.187: INFO: Created: latency-svc-hkxlt Mar 26 23:45:45.199: INFO: Got endpoints: latency-svc-hkxlt [790.554314ms] Mar 26 23:45:45.216: INFO: Created: latency-svc-mhm4g Mar 26 23:45:45.230: INFO: Got endpoints: latency-svc-mhm4g [748.073522ms] Mar 26 23:45:45.247: INFO: Created: latency-svc-v6kj8 Mar 26 23:45:45.278: INFO: Got endpoints: latency-svc-v6kj8 [768.485367ms] Mar 26 23:45:45.283: INFO: Created: latency-svc-9m2hx Mar 26 23:45:45.295: INFO: Got endpoints: latency-svc-9m2hx [742.398868ms] Mar 26 23:45:45.315: INFO: Created: latency-svc-h7qrc Mar 26 23:45:45.332: INFO: Got endpoints: latency-svc-h7qrc [719.073674ms] Mar 26 23:45:45.351: INFO: Created: latency-svc-5xz7k Mar 26 23:45:45.367: INFO: Got endpoints: latency-svc-5xz7k [718.988902ms] Mar 26 23:45:45.403: INFO: Created: latency-svc-q2j44 Mar 26 23:45:45.420: INFO: Got endpoints: latency-svc-q2j44 [693.896583ms] Mar 26 23:45:45.421: INFO: Created: latency-svc-9zfp8 Mar 26 23:45:45.434: INFO: Got endpoints: latency-svc-9zfp8 [664.801648ms] Mar 26 23:45:45.451: INFO: Created: latency-svc-m59ss Mar 26 23:45:45.475: INFO: Got endpoints: latency-svc-m59ss [663.585024ms] Mar 26 23:45:45.499: INFO: Created: latency-svc-ns4zf Mar 26 23:45:45.522: INFO: Got endpoints: latency-svc-ns4zf [639.485172ms] Mar 26 23:45:45.537: INFO: Created: latency-svc-ql9z4 Mar 26 23:45:45.548: INFO: Got endpoints: latency-svc-ql9z4 [628.950766ms] Mar 26 23:45:45.560: INFO: Created: latency-svc-rtjkr Mar 26 23:45:45.572: INFO: Got endpoints: latency-svc-rtjkr [616.591253ms] Mar 26 23:45:45.585: INFO: Created: latency-svc-wxjbb Mar 26 23:45:45.607: INFO: Got endpoints: latency-svc-wxjbb [610.054242ms] Mar 26 23:45:45.661: INFO: Created: latency-svc-r926h Mar 26 23:45:45.679: INFO: Created: latency-svc-xwq5t Mar 26 23:45:45.679: INFO: Got endpoints: latency-svc-r926h [646.491369ms] Mar 26 23:45:45.691: INFO: Got endpoints: latency-svc-xwq5t [503.792268ms] Mar 26 23:45:45.711: INFO: Created: latency-svc-7kshn Mar 26 23:45:45.727: INFO: Got endpoints: latency-svc-7kshn [527.075967ms] Mar 26 23:45:45.747: INFO: Created: latency-svc-gk2qb Mar 26 23:45:45.798: INFO: Got endpoints: latency-svc-gk2qb [568.342856ms] Mar 26 23:45:45.817: INFO: Created: latency-svc-kdcq2 Mar 26 23:45:45.829: INFO: Got endpoints: latency-svc-kdcq2 [551.287807ms] Mar 26 23:45:45.846: INFO: Created: latency-svc-7v4x5 Mar 26 23:45:45.871: INFO: Got endpoints: latency-svc-7v4x5 [575.606645ms] Mar 26 23:45:45.889: INFO: Created: latency-svc-8zm9x Mar 26 23:45:45.930: INFO: Got endpoints: latency-svc-8zm9x [598.042725ms] Mar 26 23:45:45.957: INFO: Created: latency-svc-g5tfr Mar 26 23:45:45.973: INFO: Got endpoints: latency-svc-g5tfr [605.845615ms] Mar 26 23:45:46.017: INFO: Created: latency-svc-4lcfg Mar 26 23:45:46.027: INFO: Got endpoints: latency-svc-4lcfg [606.963829ms] Mar 26 23:45:46.074: INFO: Created: latency-svc-ppsz4 Mar 26 23:45:46.093: INFO: Got endpoints: latency-svc-ppsz4 [658.158502ms] Mar 26 23:45:46.093: INFO: Created: latency-svc-5jm58 Mar 26 23:45:46.117: INFO: Got endpoints: latency-svc-5jm58 [642.322512ms] Mar 26 23:45:46.149: INFO: Created: latency-svc-p4dn6 Mar 26 23:45:46.165: INFO: Got endpoints: latency-svc-p4dn6 [642.901602ms] Mar 26 23:45:46.205: INFO: Created: 
latency-svc-sd2vz Mar 26 23:45:46.227: INFO: Created: latency-svc-4m8n6 Mar 26 23:45:46.227: INFO: Got endpoints: latency-svc-sd2vz [679.347553ms] Mar 26 23:45:46.243: INFO: Got endpoints: latency-svc-4m8n6 [670.70854ms] Mar 26 23:45:46.273: INFO: Created: latency-svc-48vhn Mar 26 23:45:46.284: INFO: Got endpoints: latency-svc-48vhn [677.023117ms] Mar 26 23:45:46.302: INFO: Created: latency-svc-7qgsl Mar 26 23:45:46.343: INFO: Got endpoints: latency-svc-7qgsl [663.548009ms] Mar 26 23:45:46.365: INFO: Created: latency-svc-9s7bz Mar 26 23:45:46.380: INFO: Got endpoints: latency-svc-9s7bz [689.100625ms] Mar 26 23:45:46.401: INFO: Created: latency-svc-wpfb8 Mar 26 23:45:46.431: INFO: Got endpoints: latency-svc-wpfb8 [704.083555ms] Mar 26 23:45:46.481: INFO: Created: latency-svc-srgnr Mar 26 23:45:46.488: INFO: Got endpoints: latency-svc-srgnr [690.13142ms] Mar 26 23:45:46.508: INFO: Created: latency-svc-dz5qb Mar 26 23:45:46.523: INFO: Got endpoints: latency-svc-dz5qb [694.429399ms] Mar 26 23:45:46.537: INFO: Created: latency-svc-tgbkn Mar 26 23:45:46.547: INFO: Got endpoints: latency-svc-tgbkn [676.345579ms] Mar 26 23:45:46.567: INFO: Created: latency-svc-xhj2j Mar 26 23:45:46.578: INFO: Got endpoints: latency-svc-xhj2j [648.632913ms] Mar 26 23:45:46.625: INFO: Created: latency-svc-wm4df Mar 26 23:45:46.647: INFO: Got endpoints: latency-svc-wm4df [673.224929ms] Mar 26 23:45:46.648: INFO: Created: latency-svc-2d8z6 Mar 26 23:45:46.656: INFO: Got endpoints: latency-svc-2d8z6 [629.015261ms] Mar 26 23:45:46.679: INFO: Created: latency-svc-ss5lf Mar 26 23:45:46.686: INFO: Got endpoints: latency-svc-ss5lf [593.668574ms] Mar 26 23:45:46.704: INFO: Created: latency-svc-n7ftk Mar 26 23:45:46.716: INFO: Got endpoints: latency-svc-n7ftk [599.324574ms] Mar 26 23:45:46.756: INFO: Created: latency-svc-pnptr Mar 26 23:45:46.777: INFO: Got endpoints: latency-svc-pnptr [611.117471ms] Mar 26 23:45:46.797: INFO: Created: latency-svc-lwhlj Mar 26 23:45:46.813: INFO: Got endpoints: latency-svc-lwhlj [585.053418ms] Mar 26 23:45:46.833: INFO: Created: latency-svc-b2zsg Mar 26 23:45:46.847: INFO: Got endpoints: latency-svc-b2zsg [604.250377ms] Mar 26 23:45:46.888: INFO: Created: latency-svc-8hr4h Mar 26 23:45:46.895: INFO: Got endpoints: latency-svc-8hr4h [611.156033ms] Mar 26 23:45:46.921: INFO: Created: latency-svc-tg55k Mar 26 23:45:46.943: INFO: Got endpoints: latency-svc-tg55k [600.395932ms] Mar 26 23:45:46.957: INFO: Created: latency-svc-phc9v Mar 26 23:45:46.967: INFO: Got endpoints: latency-svc-phc9v [587.030034ms] Mar 26 23:45:46.980: INFO: Created: latency-svc-nm8rm Mar 26 23:45:47.008: INFO: Got endpoints: latency-svc-nm8rm [576.708047ms] Mar 26 23:45:47.019: INFO: Created: latency-svc-4c44r Mar 26 23:45:47.033: INFO: Got endpoints: latency-svc-4c44r [545.0638ms] Mar 26 23:45:47.055: INFO: Created: latency-svc-j7m2p Mar 26 23:45:47.069: INFO: Got endpoints: latency-svc-j7m2p [545.910942ms] Mar 26 23:45:47.101: INFO: Created: latency-svc-fllqj Mar 26 23:45:47.163: INFO: Got endpoints: latency-svc-fllqj [615.899763ms] Mar 26 23:45:47.165: INFO: Created: latency-svc-98lt2 Mar 26 23:45:47.172: INFO: Got endpoints: latency-svc-98lt2 [593.008829ms] Mar 26 23:45:47.193: INFO: Created: latency-svc-mrmtb Mar 26 23:45:47.208: INFO: Got endpoints: latency-svc-mrmtb [561.458662ms] Mar 26 23:45:47.242: INFO: Created: latency-svc-b5m79 Mar 26 23:45:47.250: INFO: Got endpoints: latency-svc-b5m79 [593.465403ms] Mar 26 23:45:47.289: INFO: Created: latency-svc-79k5n Mar 26 23:45:47.305: INFO: Got endpoints: 
latency-svc-79k5n [618.269141ms] Mar 26 23:45:47.335: INFO: Created: latency-svc-vcn4x Mar 26 23:45:47.352: INFO: Got endpoints: latency-svc-vcn4x [635.296271ms] Mar 26 23:45:47.371: INFO: Created: latency-svc-rrb6d Mar 26 23:45:47.388: INFO: Got endpoints: latency-svc-rrb6d [610.934856ms] Mar 26 23:45:47.421: INFO: Created: latency-svc-kptcv Mar 26 23:45:47.441: INFO: Got endpoints: latency-svc-kptcv [628.484328ms] Mar 26 23:45:47.469: INFO: Created: latency-svc-d4lcl Mar 26 23:45:47.482: INFO: Got endpoints: latency-svc-d4lcl [635.366305ms] Mar 26 23:45:47.497: INFO: Created: latency-svc-dg2h8 Mar 26 23:45:47.512: INFO: Got endpoints: latency-svc-dg2h8 [616.880384ms] Mar 26 23:45:47.577: INFO: Created: latency-svc-t26l7 Mar 26 23:45:47.587: INFO: Got endpoints: latency-svc-t26l7 [643.18309ms] Mar 26 23:45:47.611: INFO: Created: latency-svc-fd487 Mar 26 23:45:47.620: INFO: Got endpoints: latency-svc-fd487 [653.156958ms] Mar 26 23:45:47.642: INFO: Created: latency-svc-47gb8 Mar 26 23:45:47.656: INFO: Got endpoints: latency-svc-47gb8 [648.61158ms] Mar 26 23:45:47.697: INFO: Created: latency-svc-5f2xq Mar 26 23:45:47.721: INFO: Created: latency-svc-5l4c6 Mar 26 23:45:47.721: INFO: Got endpoints: latency-svc-5f2xq [687.993214ms] Mar 26 23:45:47.735: INFO: Got endpoints: latency-svc-5l4c6 [665.3722ms] Mar 26 23:45:47.755: INFO: Created: latency-svc-5bb57 Mar 26 23:45:47.771: INFO: Got endpoints: latency-svc-5bb57 [607.55584ms] Mar 26 23:45:47.791: INFO: Created: latency-svc-gws6z Mar 26 23:45:47.816: INFO: Got endpoints: latency-svc-gws6z [644.292194ms] Mar 26 23:45:47.833: INFO: Created: latency-svc-chq5x Mar 26 23:45:47.849: INFO: Got endpoints: latency-svc-chq5x [640.93758ms] Mar 26 23:45:47.870: INFO: Created: latency-svc-b6sv5 Mar 26 23:45:47.907: INFO: Got endpoints: latency-svc-b6sv5 [657.049256ms] Mar 26 23:45:47.966: INFO: Created: latency-svc-zhhs5 Mar 26 23:45:48.001: INFO: Got endpoints: latency-svc-zhhs5 [696.264317ms] Mar 26 23:45:48.001: INFO: Created: latency-svc-lhdws Mar 26 23:45:48.028: INFO: Got endpoints: latency-svc-lhdws [675.842608ms] Mar 26 23:45:48.050: INFO: Created: latency-svc-f7qkl Mar 26 23:45:48.128: INFO: Got endpoints: latency-svc-f7qkl [739.822435ms] Mar 26 23:45:48.145: INFO: Created: latency-svc-k94ht Mar 26 23:45:48.159: INFO: Got endpoints: latency-svc-k94ht [718.11287ms] Mar 26 23:45:48.181: INFO: Created: latency-svc-x2l2f Mar 26 23:45:48.189: INFO: Got endpoints: latency-svc-x2l2f [706.693822ms] Mar 26 23:45:48.205: INFO: Created: latency-svc-z4bx2 Mar 26 23:45:48.213: INFO: Got endpoints: latency-svc-z4bx2 [700.949677ms] Mar 26 23:45:48.254: INFO: Created: latency-svc-blpl7 Mar 26 23:45:48.272: INFO: Got endpoints: latency-svc-blpl7 [685.731928ms] Mar 26 23:45:48.273: INFO: Created: latency-svc-tp55p Mar 26 23:45:48.286: INFO: Got endpoints: latency-svc-tp55p [665.421414ms] Mar 26 23:45:48.315: INFO: Created: latency-svc-nl6bv Mar 26 23:45:48.328: INFO: Got endpoints: latency-svc-nl6bv [671.88017ms] Mar 26 23:45:48.385: INFO: Created: latency-svc-pmzz2 Mar 26 23:45:48.402: INFO: Got endpoints: latency-svc-pmzz2 [680.980496ms] Mar 26 23:45:48.403: INFO: Created: latency-svc-c7dpg Mar 26 23:45:48.418: INFO: Got endpoints: latency-svc-c7dpg [683.237959ms] Mar 26 23:45:48.439: INFO: Created: latency-svc-hqhtc Mar 26 23:45:48.454: INFO: Got endpoints: latency-svc-hqhtc [683.375103ms] Mar 26 23:45:48.471: INFO: Created: latency-svc-8k9sz Mar 26 23:45:48.484: INFO: Got endpoints: latency-svc-8k9sz [668.21165ms] Mar 26 23:45:48.530: INFO: Created: 
latency-svc-wjh8n Mar 26 23:45:48.544: INFO: Got endpoints: latency-svc-wjh8n [694.841216ms] Mar 26 23:45:48.561: INFO: Created: latency-svc-vqkkc Mar 26 23:45:48.574: INFO: Got endpoints: latency-svc-vqkkc [666.925403ms] Mar 26 23:45:48.595: INFO: Created: latency-svc-x78p6 Mar 26 23:45:48.666: INFO: Got endpoints: latency-svc-x78p6 [665.246073ms] Mar 26 23:45:48.668: INFO: Created: latency-svc-pf5g5 Mar 26 23:45:48.674: INFO: Got endpoints: latency-svc-pf5g5 [646.666621ms] Mar 26 23:45:48.692: INFO: Created: latency-svc-bw25f Mar 26 23:45:48.717: INFO: Got endpoints: latency-svc-bw25f [589.360585ms] Mar 26 23:45:48.747: INFO: Created: latency-svc-99npd Mar 26 23:45:48.759: INFO: Got endpoints: latency-svc-99npd [599.177769ms] Mar 26 23:45:48.800: INFO: Created: latency-svc-dfv6x Mar 26 23:45:48.816: INFO: Created: latency-svc-lfws9 Mar 26 23:45:48.816: INFO: Got endpoints: latency-svc-dfv6x [627.074758ms] Mar 26 23:45:48.841: INFO: Got endpoints: latency-svc-lfws9 [627.787464ms] Mar 26 23:45:48.867: INFO: Created: latency-svc-5gdsn Mar 26 23:45:48.879: INFO: Got endpoints: latency-svc-5gdsn [606.189538ms] Mar 26 23:45:48.897: INFO: Created: latency-svc-vcmpd Mar 26 23:45:48.966: INFO: Got endpoints: latency-svc-vcmpd [679.97222ms] Mar 26 23:45:48.968: INFO: Created: latency-svc-xlfjw Mar 26 23:45:48.975: INFO: Got endpoints: latency-svc-xlfjw [646.889229ms] Mar 26 23:45:48.997: INFO: Created: latency-svc-nbhk2 Mar 26 23:45:49.011: INFO: Got endpoints: latency-svc-nbhk2 [608.84623ms] Mar 26 23:45:49.039: INFO: Created: latency-svc-zrlx7 Mar 26 23:45:49.053: INFO: Got endpoints: latency-svc-zrlx7 [635.321773ms] Mar 26 23:45:49.122: INFO: Created: latency-svc-hn7hz Mar 26 23:45:49.142: INFO: Created: latency-svc-w8rgr Mar 26 23:45:49.142: INFO: Got endpoints: latency-svc-hn7hz [688.216323ms] Mar 26 23:45:49.155: INFO: Got endpoints: latency-svc-w8rgr [670.767895ms] Mar 26 23:45:49.189: INFO: Created: latency-svc-tznkk Mar 26 23:45:49.203: INFO: Got endpoints: latency-svc-tznkk [658.689393ms] Mar 26 23:45:49.253: INFO: Created: latency-svc-2md7k Mar 26 23:45:49.275: INFO: Got endpoints: latency-svc-2md7k [700.582076ms] Mar 26 23:45:49.275: INFO: Created: latency-svc-l7gxr Mar 26 23:45:49.292: INFO: Got endpoints: latency-svc-l7gxr [625.685233ms] Mar 26 23:45:49.311: INFO: Created: latency-svc-fln9l Mar 26 23:45:49.328: INFO: Got endpoints: latency-svc-fln9l [653.573883ms] Mar 26 23:45:49.346: INFO: Created: latency-svc-spdcr Mar 26 23:45:49.379: INFO: Got endpoints: latency-svc-spdcr [661.514817ms] Mar 26 23:45:49.392: INFO: Created: latency-svc-4cmcb Mar 26 23:45:49.406: INFO: Got endpoints: latency-svc-4cmcb [647.204957ms] Mar 26 23:45:49.423: INFO: Created: latency-svc-59hwn Mar 26 23:45:49.435: INFO: Got endpoints: latency-svc-59hwn [618.919878ms] Mar 26 23:45:49.435: INFO: Latencies: [38.856292ms 68.28443ms 166.54461ms 186.70368ms 230.283916ms 309.799221ms 320.20165ms 350.8601ms 374.143823ms 404.481711ms 459.096764ms 503.792268ms 518.226962ms 527.075967ms 545.0638ms 545.910942ms 551.287807ms 561.458662ms 561.637638ms 562.765794ms 568.342856ms 574.083036ms 575.606645ms 576.708047ms 581.085499ms 584.099024ms 585.053418ms 587.030034ms 589.360585ms 591.424589ms 593.008829ms 593.465403ms 593.668574ms 598.042725ms 599.177769ms 599.324574ms 599.604103ms 600.395932ms 604.250377ms 604.998027ms 605.845615ms 606.189538ms 606.963829ms 607.55584ms 608.418549ms 608.84623ms 610.054242ms 610.335925ms 610.622872ms 610.934856ms 611.117471ms 611.156033ms 615.899763ms 616.219847ms 616.436105ms 616.591253ms 
616.698463ms 616.880384ms 618.269141ms 618.760993ms 618.919878ms 625.685233ms 627.074758ms 627.787464ms 628.484328ms 628.950766ms 629.015261ms 635.252563ms 635.296271ms 635.321773ms 635.366305ms 639.485172ms 640.93758ms 642.322512ms 642.901602ms 643.18309ms 644.292194ms 645.536455ms 645.968518ms 646.491369ms 646.666621ms 646.889229ms 647.073999ms 647.204957ms 647.366859ms 648.61158ms 648.632913ms 650.956945ms 653.156958ms 653.272504ms 653.573883ms 654.26511ms 657.049256ms 658.158502ms 658.495988ms 658.689393ms 659.061716ms 659.169356ms 659.490016ms 660.527135ms 661.514817ms 663.548009ms 663.585024ms 664.270674ms 664.801648ms 665.246073ms 665.3722ms 665.421414ms 665.87857ms 666.374487ms 666.925403ms 667.386328ms 668.21165ms 670.70854ms 670.767895ms 671.88017ms 673.224929ms 673.803984ms 675.842608ms 675.996402ms 676.345579ms 676.768819ms 676.923066ms 677.023117ms 677.191093ms 677.810611ms 678.137311ms 679.249696ms 679.347553ms 679.97222ms 680.451298ms 680.980496ms 683.237959ms 683.375103ms 683.718468ms 685.731928ms 687.890946ms 687.97031ms 687.993214ms 688.216323ms 688.240243ms 689.100625ms 690.13142ms 692.595248ms 693.896583ms 694.429399ms 694.841216ms 694.884153ms 695.089729ms 695.170192ms 696.264317ms 696.492533ms 700.582076ms 700.62637ms 700.699786ms 700.949677ms 702.157654ms 703.258706ms 704.083555ms 706.174387ms 706.435165ms 706.693822ms 707.402828ms 712.522569ms 715.863672ms 718.11287ms 718.988902ms 719.073674ms 719.366585ms 724.185408ms 725.073515ms 725.906334ms 727.941743ms 730.095805ms 730.317844ms 730.884541ms 733.500991ms 735.586408ms 739.822435ms 739.99572ms 742.398868ms 748.073522ms 748.176895ms 748.736533ms 749.705443ms 750.160573ms 755.259052ms 764.226178ms 765.349607ms 768.485367ms 782.114447ms 785.399811ms 790.554314ms 791.243389ms 793.635475ms 807.715958ms 808.260149ms 808.493025ms 815.165767ms 830.781705ms] Mar 26 23:45:49.435: INFO: 50 %ile: 661.514817ms Mar 26 23:45:49.436: INFO: 90 %ile: 742.398868ms Mar 26 23:45:49.436: INFO: 99 %ile: 815.165767ms Mar 26 23:45:49.436: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:45:49.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6597" for this suite. 
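The 50/90/99 %ile figures above are plain order statistics over the 200 "Got endpoints" samples: sort the latencies and index in (with 200 samples, the 99th percentile lands on the 199th sorted value, 815.165767ms, as printed). A sketch of the computation; the e2e framework's exact index rounding may differ:

    package example

    import (
        "sort"
        "time"
    )

    // percentile returns the p-th percentile of the collected endpoint
    // latencies, matching the "50 %ile / 90 %ile / 99 %ile" summary lines.
    func percentile(samples []time.Duration, p int) time.Duration {
        sorted := append([]time.Duration(nil), samples...)
        sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
        idx := len(sorted) * p / 100
        if idx >= len(sorted) {
            idx = len(sorted) - 1
        }
        return sorted[idx]
    }
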
• [SLOW TEST:12.146 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":32,"skipped":523,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:45:49.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0326 23:45:50.602765 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 26 23:45:50.602: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:45:50.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6083" for this suite. 
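The deletion step in this garbage-collector test hinges on deleteOptions.PropagationPolicy=Orphan: the collector strips the owner reference from the ReplicaSet instead of cascading into it, which is why the test waits and confirms the RS survives. With client-go this looks roughly like the following (a sketch; assumes the current context-taking Delete signature, which is newer than the client vintage in this run):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteOrphaningChildren deletes a Deployment without cascading: the
    // garbage collector removes the owner reference from its ReplicaSet
    // rather than deleting it, so the RS outlives the Deployment.
    func deleteOrphaningChildren(cs kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return cs.AppsV1().Deployments(ns).Delete(context.TODO(), name,
            metav1.DeleteOptions{PropagationPolicy: &orphan})
    }
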
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":33,"skipped":525,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:45:50.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-b3b548cc-100f-4bbf-9a13-5c2515d59477 STEP: Creating a pod to test consume configMaps Mar 26 23:45:50.684: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710" in namespace "projected-4725" to be "Succeeded or Failed" Mar 26 23:45:50.721: INFO: Pod "pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710": Phase="Pending", Reason="", readiness=false. Elapsed: 36.702884ms Mar 26 23:45:52.731: INFO: Pod "pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046722243s Mar 26 23:45:54.879: INFO: Pod "pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.195356199s STEP: Saw pod success Mar 26 23:45:54.879: INFO: Pod "pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710" satisfied condition "Succeeded or Failed" Mar 26 23:45:54.909: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710 container projected-configmap-volume-test: STEP: delete the pod Mar 26 23:45:55.005: INFO: Waiting for pod pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710 to disappear Mar 26 23:45:55.034: INFO: Pod pod-projected-configmaps-90bf125f-9b13-4225-b4c5-0f87cc363710 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:45:55.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4725" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":526,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:45:55.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:45:55.390: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c4df4483-8049-4910-84d7-ecebbd3ebe2d" in namespace "security-context-test-4532" to be "Succeeded or Failed" Mar 26 23:45:55.510: INFO: Pod "busybox-user-65534-c4df4483-8049-4910-84d7-ecebbd3ebe2d": Phase="Pending", Reason="", readiness=false. Elapsed: 120.116262ms Mar 26 23:45:57.596: INFO: Pod "busybox-user-65534-c4df4483-8049-4910-84d7-ecebbd3ebe2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205206739s Mar 26 23:45:59.599: INFO: Pod "busybox-user-65534-c4df4483-8049-4910-84d7-ecebbd3ebe2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2084064s Mar 26 23:45:59.599: INFO: Pod "busybox-user-65534-c4df4483-8049-4910-84d7-ecebbd3ebe2d" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:45:59.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4532" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":531,"failed":0} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:45:59.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:45:59.698: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 26 23:45:59.706: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 26 23:46:04.722: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 26 23:46:04.722: INFO: Creating deployment "test-rolling-update-deployment" Mar 26 23:46:04.750: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 26 23:46:04.773: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 26 23:46:06.926: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 26 23:46:06.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863164, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863164, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863164, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863164, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 26 23:46:09.051: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 26 23:46:09.189: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8838 /apis/apps/v1/namespaces/deployment-8838/deployments/test-rolling-update-deployment b663a37b-8178-4f96-81e0-125aadcee8a9 3066966 1 2020-03-26 23:46:04 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035b06a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-26 23:46:04 +0000 UTC,LastTransitionTime:2020-03-26 23:46:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-26 23:46:08 +0000 UTC,LastTransitionTime:2020-03-26 23:46:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 26 23:46:09.201: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-8838 /apis/apps/v1/namespaces/deployment-8838/replicasets/test-rolling-update-deployment-664dd8fc7f c4f06497-b11b-4b46-93ea-8f687dfe410b 3066952 1 2020-03-26 23:46:04 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b663a37b-8178-4f96-81e0-125aadcee8a9 0xc0035b0bc7 0xc0035b0bc8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035b0c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 26 23:46:09.201: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 26 23:46:09.201: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8838 /apis/apps/v1/namespaces/deployment-8838/replicasets/test-rolling-update-controller 744b2fc9-f86d-493e-a893-5d0dfb1767fd 3066964 2 2020-03-26 23:45:59 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b663a37b-8178-4f96-81e0-125aadcee8a9 0xc0035b0af7 0xc0035b0af8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0035b0b58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 26 23:46:09.227: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-fd575" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-fd575 test-rolling-update-deployment-664dd8fc7f- deployment-8838 /api/v1/namespaces/deployment-8838/pods/test-rolling-update-deployment-664dd8fc7f-fd575 3d04c01e-60a8-40e4-935a-627e133c582d 3066951 0 2020-03-26 23:46:04 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f c4f06497-b11b-4b46-93ea-8f687dfe410b 0xc0035b1107 0xc0035b1108}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4p4fw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4p4fw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4p4fw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 23:46:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 23:46:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 23:46:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-26 23:46:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.40,StartTime:2020-03-26 23:46:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-26 23:46:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://9c950d56725e2181ec7873e3eaab349427d567c501a3aed762ada055e98794d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:46:09.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8838" for this suite. • [SLOW TEST:9.682 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":36,"skipped":532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:46:09.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 26 23:46:09.355: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 26 23:46:19.991: INFO: >>> kubeConfig: /root/.kube/config Mar 26 23:46:21.916: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:46:33.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6626" for this suite. 
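A note on the deployment dump above: "MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)" is a Go formatting artifact in the test binary's logger; the actual values are the RollingUpdate defaults, maxUnavailable: 25% and maxSurge: 25%. For reference, a minimal manifest pinning the same strategy explicitly would look roughly like this (a sketch: the name is illustrative, only the image, labels, and strategy values are taken from the dump):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment      # illustrative; the suite uses its own generated names
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%      # at most a quarter of desired replicas may be unavailable
      maxSurge: 25%            # at most a quarter extra replicas during the rollout
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12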
• [SLOW TEST:24.229 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":37,"skipped":566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:46:33.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-fb00e9b1-2833-4167-a521-201cca76c6ef in namespace container-probe-1728 Mar 26 23:46:37.746: INFO: Started pod liveness-fb00e9b1-2833-4167-a521-201cca76c6ef in namespace container-probe-1728 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 23:46:37.749: INFO: Initial restart count of pod liveness-fb00e9b1-2833-4167-a521-201cca76c6ef is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:50:38.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1728" for this suite. 
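The tcp:8080 liveness-probe pod above is created by the suite under a generated name; a hand-written equivalent would be shaped roughly like this (a sketch: the agnhost netexec server and its --http-port flag are assumptions about a container that listens on 8080, so the probe keeps succeeding and restartCount stays 0):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp           # illustrative name
spec:
  containers:
  - name: server
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["netexec", "--http-port=8080"]   # assumed server listening on 8080
    livenessProbe:
      tcpSocket:
        port: 8080             # connect succeeds, so the kubelet never restarts the container
      initialDelaySeconds: 15
      periodSeconds: 10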
• [SLOW TEST:244.852 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:50:38.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-2885bfb7-7842-4c0b-a6b5-f99fdb9ca3f3 STEP: Creating secret with name s-test-opt-upd-d769bd6a-69b2-45c6-a1f7-eac787d65070 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2885bfb7-7842-4c0b-a6b5-f99fdb9ca3f3 STEP: Updating secret s-test-opt-upd-d769bd6a-69b2-45c6-a1f7-eac787d65070 STEP: Creating secret with name s-test-opt-create-5f9e6608-8222-49d5-b26b-1fb1e4f965f1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:51:53.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2321" for this suite. 
• [SLOW TEST:74.715 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:51:53.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 26 23:51:53.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3283' Mar 26 23:51:55.966: INFO: stderr: "" Mar 26 23:51:55.966: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Mar 26 23:51:55.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3283' Mar 26 23:52:03.032: INFO: stderr: "" Mar 26 23:52:03.032: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:52:03.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3283" for this suite. 
• [SLOW TEST:9.948 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":40,"skipped":671,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:52:03.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 26 23:52:03.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e" in namespace "downward-api-2504" to be "Succeeded or Failed" Mar 26 23:52:03.106: INFO: Pod "downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.805937ms Mar 26 23:52:05.110: INFO: Pod "downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014093364s Mar 26 23:52:07.115: INFO: Pod "downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018303859s STEP: Saw pod success Mar 26 23:52:07.115: INFO: Pod "downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e" satisfied condition "Succeeded or Failed" Mar 26 23:52:07.118: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e container client-container: STEP: delete the pod Mar 26 23:52:07.194: INFO: Waiting for pod downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e to disappear Mar 26 23:52:07.202: INFO: Pod downwardapi-volume-9506183a-80ad-4a51-b143-6501e9b8a65e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:52:07.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2504" for this suite. 
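The downward API volume plugin projects pod metadata into files; the podname case that just passed corresponds to a volume item with fieldPath metadata.name, roughly like this (a sketch; names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]   # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name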
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":685,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:52:07.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-8473 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8473 to expose endpoints map[] Mar 26 23:52:07.287: INFO: Get endpoints failed (12.164393ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 26 23:52:08.295: INFO: successfully validated that service endpoint-test2 in namespace services-8473 exposes endpoints map[] (1.020307553s elapsed) STEP: Creating pod pod1 in namespace services-8473 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8473 to expose endpoints map[pod1:[80]] Mar 26 23:52:11.368: INFO: successfully validated that service endpoint-test2 in namespace services-8473 exposes endpoints map[pod1:[80]] (3.043574728s elapsed) STEP: Creating pod pod2 in namespace services-8473 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8473 to expose endpoints map[pod1:[80] pod2:[80]] Mar 26 23:52:15.515: INFO: successfully validated that service endpoint-test2 in namespace services-8473 exposes endpoints map[pod1:[80] pod2:[80]] (4.141929731s elapsed) STEP: Deleting pod pod1 in namespace services-8473 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8473 to expose endpoints map[pod2:[80]] Mar 26 23:52:16.548: INFO: successfully validated that service endpoint-test2 in namespace services-8473 exposes endpoints map[pod2:[80]] (1.028295128s elapsed) STEP: Deleting pod pod2 in namespace services-8473 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8473 to expose endpoints map[] Mar 26 23:52:17.565: INFO: successfully validated that service endpoint-test2 in namespace services-8473 exposes endpoints map[] (1.007405268s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:52:17.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8473" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.416 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":42,"skipped":695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:52:17.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Mar 26 23:52:17.732: INFO: Waiting up to 5m0s for pod "client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2" in namespace "containers-698" to be "Succeeded or Failed" Mar 26 23:52:17.736: INFO: Pod "client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.773206ms Mar 26 23:52:19.741: INFO: Pod "client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008501947s Mar 26 23:52:21.747: INFO: Pod "client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015166324s STEP: Saw pod success Mar 26 23:52:21.748: INFO: Pod "client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2" satisfied condition "Succeeded or Failed" Mar 26 23:52:21.751: INFO: Trying to get logs from node latest-worker pod client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2 container test-container: STEP: delete the pod Mar 26 23:52:21.780: INFO: Waiting for pod client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2 to disappear Mar 26 23:52:21.794: INFO: Pod client-containers-b6aa66d4-9376-440d-ab52-30f8b42333c2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:52:21.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-698" for this suite. 
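Overriding an image's default command (its Docker ENTRYPOINT) is done with the container's command field, while args would override the image's CMD. A sketch (names, image, and the echoed text are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]               # replaces the image ENTRYPOINT
    args: ["entrypoint", "overridden"]   # replaces the image CMD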
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":734,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:52:21.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:52:21.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1506" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":44,"skipped":742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:52:21.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 26 23:52:21.934: INFO: Waiting up to 5m0s for pod "pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f" in namespace "emptydir-5199" to be "Succeeded or Failed" Mar 26 23:52:21.938: INFO: Pod "pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.804463ms Mar 26 23:52:23.942: INFO: Pod "pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007908037s Mar 26 23:52:25.959: INFO: Pod "pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024939688s STEP: Saw pod success Mar 26 23:52:25.959: INFO: Pod "pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f" satisfied condition "Succeeded or Failed" Mar 26 23:52:25.962: INFO: Trying to get logs from node latest-worker pod pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f container test-container: STEP: delete the pod Mar 26 23:52:25.982: INFO: Waiting for pod pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f to disappear Mar 26 23:52:25.986: INFO: Pod pod-65dabe4d-2ed5-4fc0-b727-d06ef29d003f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:52:25.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5199" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:52:25.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-5118a5ba-37ff-44d9-9b47-944a2e4d82cf in namespace container-probe-2836 Mar 26 23:52:30.118: INFO: Started pod busybox-5118a5ba-37ff-44d9-9b47-944a2e4d82cf in namespace container-probe-2836 STEP: checking the pod's current state and verifying that restartCount is present Mar 26 23:52:30.121: INFO: Initial restart count of pod busybox-5118a5ba-37ff-44d9-9b47-944a2e4d82cf is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:56:30.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2836" for this suite. 
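An exec liveness probe runs a command inside the container and treats exit status 0 as healthy; here /tmp/health is created at startup and never removed, so the probe always passes and the restart count stays 0 for the whole four-minute observation window. A sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-exec-liveness
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits 0 while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5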
• [SLOW TEST:244.844 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":849,"failed":0} S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:56:30.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 26 23:56:35.445: INFO: Successfully updated pod "annotationupdatee510c965-d2f1-4714-b7ad-8c7cfb94b2a2" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:56:37.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6470" for this suite. • [SLOW TEST:6.694 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":850,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:56:37.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 26 23:56:37.611: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 26 23:56:37.622: INFO: Waiting for terminating namespaces to be deleted... 
Mar 26 23:56:37.624: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 26 23:56:37.629: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 23:56:37.629: INFO: Container kube-proxy ready: true, restart count 0 Mar 26 23:56:37.629: INFO: annotationupdatee510c965-d2f1-4714-b7ad-8c7cfb94b2a2 from downward-api-6470 started at 2020-03-26 23:56:30 +0000 UTC (1 container statuses recorded) Mar 26 23:56:37.629: INFO: Container client-container ready: true, restart count 0 Mar 26 23:56:37.629: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 23:56:37.629: INFO: Container kindnet-cni ready: true, restart count 0 Mar 26 23:56:37.629: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 26 23:56:37.647: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 23:56:37.647: INFO: Container kube-proxy ready: true, restart count 0 Mar 26 23:56:37.647: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 26 23:56:37.647: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-809f96c7-ee8e-407c-9aee-7a3cd2f59412 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-809f96c7-ee8e-407c-9aee-7a3cd2f59412 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-809f96c7-ee8e-407c-9aee-7a3cd2f59412 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:56:45.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2185" for this suite. 
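The pattern above, launch a labelless probe pod to find a schedulable node, label that node, then relaunch with a matching nodeSelector, is the standard way to validate selector-based placement. The relaunched pod is shaped like this (a sketch: the label key and value are the ones logged above, the rest is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-809f96c7-ee8e-407c-9aee-7a3cd2f59412: "42"   # only latest-worker2 carries this label
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2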
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.273 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":48,"skipped":851,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:56:45.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:56:45.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 26 23:56:46.445: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T23:56:46Z generation:1 name:name1 resourceVersion:3069375 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4f8f5fff-4053-42c3-af60-7ed0318b093c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 26 23:56:56.450: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T23:56:56Z generation:1 name:name2 resourceVersion:3069425 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f41c227d-177a-4e34-bdbb-967516d4ef65] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 26 23:57:06.455: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T23:56:46Z generation:2 name:name1 resourceVersion:3069453 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4f8f5fff-4053-42c3-af60-7ed0318b093c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 26 23:57:16.460: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T23:56:56Z generation:2 name:name2 resourceVersion:3069483 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f41c227d-177a-4e34-bdbb-967516d4ef65] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 26 23:57:26.487: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T23:56:46Z generation:2 name:name1 
resourceVersion:3069512 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4f8f5fff-4053-42c3-af60-7ed0318b093c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 26 23:57:36.494: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-26T23:56:56Z generation:2 name:name2 resourceVersion:3069542 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f41c227d-177a-4e34-bdbb-967516d4ef65] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:57:47.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1581" for this suite. • [SLOW TEST:61.207 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":49,"skipped":865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:57:47.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
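The pod created in the next step carries a PreStop exec hook, which the kubelet runs to completion before terminating the container; the suite's hook calls back to the handler pod created above, which is how the final "check prestop hook" step can verify it fired. A sketch of such a spec (the hook command shown is illustrative, not the suite's):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop fired"]   # runs before SIGTERM is delivered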
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 26 23:57:55.186: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 23:57:55.206: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 23:57:57.206: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 23:57:57.211: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 23:57:59.206: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 23:57:59.210: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 23:58:01.207: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 23:58:01.210: INFO: Pod pod-with-prestop-exec-hook still exists Mar 26 23:58:03.206: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 26 23:58:03.209: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:03.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1325" for this suite. • [SLOW TEST:16.231 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":889,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:03.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0326 23:58:13.344844 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 26 23:58:13.344: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:13.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4595" for this suite. • [SLOW TEST:10.106 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":51,"skipped":891,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:13.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2381 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2381 STEP: creating replication controller externalsvc in namespace services-2381 I0326 23:58:13.547534 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2381, replica count: 2 I0326 23:58:16.598114 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 23:58:19.598348 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 26 23:58:19.655: 
INFO: Creating new exec pod Mar 26 23:58:23.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2381 execpodfkxrv -- /bin/sh -x -c nslookup nodeport-service' Mar 26 23:58:23.916: INFO: stderr: "I0326 23:58:23.803516 608 log.go:172] (0xc00003aa50) (0xc0006c1400) Create stream\nI0326 23:58:23.803595 608 log.go:172] (0xc00003aa50) (0xc0006c1400) Stream added, broadcasting: 1\nI0326 23:58:23.807411 608 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0326 23:58:23.807458 608 log.go:172] (0xc00003aa50) (0xc0009fa000) Create stream\nI0326 23:58:23.807473 608 log.go:172] (0xc00003aa50) (0xc0009fa000) Stream added, broadcasting: 3\nI0326 23:58:23.808691 608 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0326 23:58:23.808726 608 log.go:172] (0xc00003aa50) (0xc0004e0000) Create stream\nI0326 23:58:23.808743 608 log.go:172] (0xc00003aa50) (0xc0004e0000) Stream added, broadcasting: 5\nI0326 23:58:23.809834 608 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0326 23:58:23.897028 608 log.go:172] (0xc00003aa50) Data frame received for 5\nI0326 23:58:23.897060 608 log.go:172] (0xc0004e0000) (5) Data frame handling\nI0326 23:58:23.897083 608 log.go:172] (0xc0004e0000) (5) Data frame sent\n+ nslookup nodeport-service\nI0326 23:58:23.907151 608 log.go:172] (0xc00003aa50) Data frame received for 3\nI0326 23:58:23.907177 608 log.go:172] (0xc0009fa000) (3) Data frame handling\nI0326 23:58:23.907208 608 log.go:172] (0xc0009fa000) (3) Data frame sent\nI0326 23:58:23.908510 608 log.go:172] (0xc00003aa50) Data frame received for 3\nI0326 23:58:23.908528 608 log.go:172] (0xc0009fa000) (3) Data frame handling\nI0326 23:58:23.908543 608 log.go:172] (0xc0009fa000) (3) Data frame sent\nI0326 23:58:23.908838 608 log.go:172] (0xc00003aa50) Data frame received for 3\nI0326 23:58:23.908866 608 log.go:172] (0xc0009fa000) (3) Data frame handling\nI0326 23:58:23.908886 608 log.go:172] (0xc00003aa50) Data frame received for 5\nI0326 23:58:23.908899 608 log.go:172] (0xc0004e0000) (5) Data frame handling\nI0326 23:58:23.911204 608 log.go:172] (0xc00003aa50) Data frame received for 1\nI0326 23:58:23.911245 608 log.go:172] (0xc0006c1400) (1) Data frame handling\nI0326 23:58:23.911256 608 log.go:172] (0xc0006c1400) (1) Data frame sent\nI0326 23:58:23.911282 608 log.go:172] (0xc00003aa50) (0xc0006c1400) Stream removed, broadcasting: 1\nI0326 23:58:23.911304 608 log.go:172] (0xc00003aa50) Go away received\nI0326 23:58:23.911825 608 log.go:172] (0xc00003aa50) (0xc0006c1400) Stream removed, broadcasting: 1\nI0326 23:58:23.911848 608 log.go:172] (0xc00003aa50) (0xc0009fa000) Stream removed, broadcasting: 3\nI0326 23:58:23.911860 608 log.go:172] (0xc00003aa50) (0xc0004e0000) Stream removed, broadcasting: 5\n" Mar 26 23:58:23.916: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2381.svc.cluster.local\tcanonical name = externalsvc.services-2381.svc.cluster.local.\nName:\texternalsvc.services-2381.svc.cluster.local\nAddress: 10.96.148.17\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2381, will wait for the garbage collector to delete the pods Mar 26 23:58:23.977: INFO: Deleting ReplicationController externalsvc took: 6.87432ms Mar 26 23:58:24.277: INFO: Terminating ReplicationController externalsvc pods took: 300.259297ms Mar 26 23:58:33.115: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:33.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2381" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:19.788 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":52,"skipped":896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:33.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 26 23:58:37.886: INFO: Successfully updated pod "labelsupdatefca52112-8c62-4dbd-b60d-6a7baafd110c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:39.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-324" for this suite. 
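Label changes propagate into downwardAPI volume files because the kubelet re-syncs the projected metadata, which is what the test above waits for after the update. A sketch of a pod whose /etc/podinfo/labels file tracks its labels (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    purpose: before-update     # patching this label later changes the projected file
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels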
• [SLOW TEST:6.778 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":922,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:39.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:40.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4449" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":54,"skipped":928,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:40.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:58:40.157: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/ pods/ (200; 4.662285ms) Mar 26 23:58:40.161: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.077475ms) Mar 26 23:58:40.169: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.717906ms) Mar 26 23:58:40.172: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.795255ms) Mar 26 23:58:40.174: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.726912ms)
Mar 26 23:58:40.178: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.788014ms) Mar 26 23:58:40.206: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 27.367993ms) Mar 26 23:58:40.209: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.54495ms) Mar 26 23:58:40.219: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 9.425919ms) Mar 26 23:58:40.222: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.599853ms)
Mar 26 23:58:40.227: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.459464ms) Mar 26 23:58:40.231: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.38126ms) Mar 26 23:58:40.234: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.007897ms) Mar 26 23:58:40.237: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.881391ms) Mar 26 23:58:40.240: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.787642ms)
Mar 26 23:58:40.243: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.783012ms) Mar 26 23:58:40.246: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.71389ms) Mar 26 23:58:40.248: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.467744ms) Mar 26 23:58:40.251: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 2.628573ms) Mar 26 23:58:40.254: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/
(200; 3.157698ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:40.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3572" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":55,"skipped":949,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:40.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:40.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4839" for this suite. STEP: Destroying namespace "nspatchtest-ee0f8f06-cf2b-46cc-8c1c-8740afbba0cb-2934" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":56,"skipped":1010,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:40.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8224 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8224 I0326 23:58:40.579654 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8224, replica count: 2 I0326 23:58:43.630377 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0326 23:58:46.630614 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 26 
23:58:46.630: INFO: Creating new exec pod Mar 26 23:58:51.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8224 execpoddj6lx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 26 23:58:51.917: INFO: stderr: "I0326 23:58:51.826659 630 log.go:172] (0xc000a509a0) (0xc000a260a0) Create stream\nI0326 23:58:51.826718 630 log.go:172] (0xc000a509a0) (0xc000a260a0) Stream added, broadcasting: 1\nI0326 23:58:51.829655 630 log.go:172] (0xc000a509a0) Reply frame received for 1\nI0326 23:58:51.829692 630 log.go:172] (0xc000a509a0) (0xc000623180) Create stream\nI0326 23:58:51.829701 630 log.go:172] (0xc000a509a0) (0xc000623180) Stream added, broadcasting: 3\nI0326 23:58:51.830702 630 log.go:172] (0xc000a509a0) Reply frame received for 3\nI0326 23:58:51.830756 630 log.go:172] (0xc000a509a0) (0xc000a26140) Create stream\nI0326 23:58:51.830780 630 log.go:172] (0xc000a509a0) (0xc000a26140) Stream added, broadcasting: 5\nI0326 23:58:51.831893 630 log.go:172] (0xc000a509a0) Reply frame received for 5\nI0326 23:58:51.909515 630 log.go:172] (0xc000a509a0) Data frame received for 5\nI0326 23:58:51.909553 630 log.go:172] (0xc000a26140) (5) Data frame handling\nI0326 23:58:51.909582 630 log.go:172] (0xc000a26140) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0326 23:58:51.910238 630 log.go:172] (0xc000a509a0) Data frame received for 5\nI0326 23:58:51.910273 630 log.go:172] (0xc000a26140) (5) Data frame handling\nI0326 23:58:51.910302 630 log.go:172] (0xc000a26140) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0326 23:58:51.910538 630 log.go:172] (0xc000a509a0) Data frame received for 3\nI0326 23:58:51.910559 630 log.go:172] (0xc000623180) (3) Data frame handling\nI0326 23:58:51.910683 630 log.go:172] (0xc000a509a0) Data frame received for 5\nI0326 23:58:51.910702 630 log.go:172] (0xc000a26140) (5) Data frame handling\nI0326 23:58:51.912358 630 log.go:172] (0xc000a509a0) Data frame received for 1\nI0326 23:58:51.912380 630 log.go:172] (0xc000a260a0) (1) Data frame handling\nI0326 23:58:51.912402 630 log.go:172] (0xc000a260a0) (1) Data frame sent\nI0326 23:58:51.912416 630 log.go:172] (0xc000a509a0) (0xc000a260a0) Stream removed, broadcasting: 1\nI0326 23:58:51.912559 630 log.go:172] (0xc000a509a0) Go away received\nI0326 23:58:51.912877 630 log.go:172] (0xc000a509a0) (0xc000a260a0) Stream removed, broadcasting: 1\nI0326 23:58:51.912896 630 log.go:172] (0xc000a509a0) (0xc000623180) Stream removed, broadcasting: 3\nI0326 23:58:51.912906 630 log.go:172] (0xc000a509a0) (0xc000a26140) Stream removed, broadcasting: 5\n" Mar 26 23:58:51.917: INFO: stdout: "" Mar 26 23:58:51.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8224 execpoddj6lx -- /bin/sh -x -c nc -zv -t -w 2 10.96.251.167 80' Mar 26 23:58:52.121: INFO: stderr: "I0326 23:58:52.038578 654 log.go:172] (0xc0000e9e40) (0xc00091c0a0) Create stream\nI0326 23:58:52.038624 654 log.go:172] (0xc0000e9e40) (0xc00091c0a0) Stream added, broadcasting: 1\nI0326 23:58:52.040750 654 log.go:172] (0xc0000e9e40) Reply frame received for 1\nI0326 23:58:52.040797 654 log.go:172] (0xc0000e9e40) (0xc00071f2c0) Create stream\nI0326 23:58:52.040810 654 log.go:172] (0xc0000e9e40) (0xc00071f2c0) Stream added, broadcasting: 3\nI0326 23:58:52.041739 654 log.go:172] (0xc0000e9e40) Reply frame received for 3\nI0326 23:58:52.041786 654 log.go:172] 
(0xc0000e9e40) (0xc00091c140) Create stream\nI0326 23:58:52.041799 654 log.go:172] (0xc0000e9e40) (0xc00091c140) Stream added, broadcasting: 5\nI0326 23:58:52.042480 654 log.go:172] (0xc0000e9e40) Reply frame received for 5\nI0326 23:58:52.113748 654 log.go:172] (0xc0000e9e40) Data frame received for 5\nI0326 23:58:52.113912 654 log.go:172] (0xc00091c140) (5) Data frame handling\nI0326 23:58:52.114008 654 log.go:172] (0xc00091c140) (5) Data frame sent\nI0326 23:58:52.114046 654 log.go:172] (0xc0000e9e40) Data frame received for 5\nI0326 23:58:52.114070 654 log.go:172] (0xc00091c140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.251.167 80\nConnection to 10.96.251.167 80 port [tcp/http] succeeded!\nI0326 23:58:52.114124 654 log.go:172] (0xc0000e9e40) Data frame received for 3\nI0326 23:58:52.114158 654 log.go:172] (0xc00071f2c0) (3) Data frame handling\nI0326 23:58:52.116100 654 log.go:172] (0xc0000e9e40) Data frame received for 1\nI0326 23:58:52.116129 654 log.go:172] (0xc00091c0a0) (1) Data frame handling\nI0326 23:58:52.116144 654 log.go:172] (0xc00091c0a0) (1) Data frame sent\nI0326 23:58:52.116159 654 log.go:172] (0xc0000e9e40) (0xc00091c0a0) Stream removed, broadcasting: 1\nI0326 23:58:52.116227 654 log.go:172] (0xc0000e9e40) Go away received\nI0326 23:58:52.116536 654 log.go:172] (0xc0000e9e40) (0xc00091c0a0) Stream removed, broadcasting: 1\nI0326 23:58:52.116570 654 log.go:172] (0xc0000e9e40) (0xc00071f2c0) Stream removed, broadcasting: 3\nI0326 23:58:52.116583 654 log.go:172] (0xc0000e9e40) (0xc00091c140) Stream removed, broadcasting: 5\n" Mar 26 23:58:52.121: INFO: stdout: "" Mar 26 23:58:52.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8224 execpoddj6lx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30074' Mar 26 23:58:52.330: INFO: stderr: "I0326 23:58:52.250149 677 log.go:172] (0xc000598790) (0xc0005541e0) Create stream\nI0326 23:58:52.250211 677 log.go:172] (0xc000598790) (0xc0005541e0) Stream added, broadcasting: 1\nI0326 23:58:52.253939 677 log.go:172] (0xc000598790) Reply frame received for 1\nI0326 23:58:52.253978 677 log.go:172] (0xc000598790) (0xc000554280) Create stream\nI0326 23:58:52.253985 677 log.go:172] (0xc000598790) (0xc000554280) Stream added, broadcasting: 3\nI0326 23:58:52.255175 677 log.go:172] (0xc000598790) Reply frame received for 3\nI0326 23:58:52.255213 677 log.go:172] (0xc000598790) (0xc000a38000) Create stream\nI0326 23:58:52.255222 677 log.go:172] (0xc000598790) (0xc000a38000) Stream added, broadcasting: 5\nI0326 23:58:52.256386 677 log.go:172] (0xc000598790) Reply frame received for 5\nI0326 23:58:52.324348 677 log.go:172] (0xc000598790) Data frame received for 5\nI0326 23:58:52.324397 677 log.go:172] (0xc000a38000) (5) Data frame handling\nI0326 23:58:52.324437 677 log.go:172] (0xc000a38000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30074\nConnection to 172.17.0.13 30074 port [tcp/30074] succeeded!\nI0326 23:58:52.324720 677 log.go:172] (0xc000598790) Data frame received for 5\nI0326 23:58:52.324853 677 log.go:172] (0xc000a38000) (5) Data frame handling\nI0326 23:58:52.324905 677 log.go:172] (0xc000598790) Data frame received for 3\nI0326 23:58:52.324930 677 log.go:172] (0xc000554280) (3) Data frame handling\nI0326 23:58:52.326733 677 log.go:172] (0xc000598790) Data frame received for 1\nI0326 23:58:52.326779 677 log.go:172] (0xc0005541e0) (1) Data frame handling\nI0326 23:58:52.326814 677 log.go:172] (0xc0005541e0) (1) Data frame sent\nI0326 
23:58:52.326831 677 log.go:172] (0xc000598790) (0xc0005541e0) Stream removed, broadcasting: 1\nI0326 23:58:52.326971 677 log.go:172] (0xc000598790) Go away received\nI0326 23:58:52.327239 677 log.go:172] (0xc000598790) (0xc0005541e0) Stream removed, broadcasting: 1\nI0326 23:58:52.327256 677 log.go:172] (0xc000598790) (0xc000554280) Stream removed, broadcasting: 3\nI0326 23:58:52.327267 677 log.go:172] (0xc000598790) (0xc000a38000) Stream removed, broadcasting: 5\n" Mar 26 23:58:52.331: INFO: stdout: "" Mar 26 23:58:52.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8224 execpoddj6lx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30074' Mar 26 23:58:52.558: INFO: stderr: "I0326 23:58:52.467973 697 log.go:172] (0xc000900c60) (0xc000b28640) Create stream\nI0326 23:58:52.468030 697 log.go:172] (0xc000900c60) (0xc000b28640) Stream added, broadcasting: 1\nI0326 23:58:52.473664 697 log.go:172] (0xc000900c60) Reply frame received for 1\nI0326 23:58:52.473707 697 log.go:172] (0xc000900c60) (0xc0007af5e0) Create stream\nI0326 23:58:52.473719 697 log.go:172] (0xc000900c60) (0xc0007af5e0) Stream added, broadcasting: 3\nI0326 23:58:52.474703 697 log.go:172] (0xc000900c60) Reply frame received for 3\nI0326 23:58:52.474744 697 log.go:172] (0xc000900c60) (0xc00053aa00) Create stream\nI0326 23:58:52.474759 697 log.go:172] (0xc000900c60) (0xc00053aa00) Stream added, broadcasting: 5\nI0326 23:58:52.475556 697 log.go:172] (0xc000900c60) Reply frame received for 5\nI0326 23:58:52.551336 697 log.go:172] (0xc000900c60) Data frame received for 5\nI0326 23:58:52.551387 697 log.go:172] (0xc00053aa00) (5) Data frame handling\nI0326 23:58:52.551426 697 log.go:172] (0xc00053aa00) (5) Data frame sent\nI0326 23:58:52.551450 697 log.go:172] (0xc000900c60) Data frame received for 5\nI0326 23:58:52.551466 697 log.go:172] (0xc00053aa00) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30074\nConnection to 172.17.0.12 30074 port [tcp/30074] succeeded!\nI0326 23:58:52.551506 697 log.go:172] (0xc00053aa00) (5) Data frame sent\nI0326 23:58:52.551894 697 log.go:172] (0xc000900c60) Data frame received for 3\nI0326 23:58:52.551926 697 log.go:172] (0xc0007af5e0) (3) Data frame handling\nI0326 23:58:52.551961 697 log.go:172] (0xc000900c60) Data frame received for 5\nI0326 23:58:52.551987 697 log.go:172] (0xc00053aa00) (5) Data frame handling\nI0326 23:58:52.553842 697 log.go:172] (0xc000900c60) Data frame received for 1\nI0326 23:58:52.553863 697 log.go:172] (0xc000b28640) (1) Data frame handling\nI0326 23:58:52.553876 697 log.go:172] (0xc000b28640) (1) Data frame sent\nI0326 23:58:52.553897 697 log.go:172] (0xc000900c60) (0xc000b28640) Stream removed, broadcasting: 1\nI0326 23:58:52.553954 697 log.go:172] (0xc000900c60) Go away received\nI0326 23:58:52.554366 697 log.go:172] (0xc000900c60) (0xc000b28640) Stream removed, broadcasting: 1\nI0326 23:58:52.554397 697 log.go:172] (0xc000900c60) (0xc0007af5e0) Stream removed, broadcasting: 3\nI0326 23:58:52.554410 697 log.go:172] (0xc000900c60) (0xc00053aa00) Stream removed, broadcasting: 5\n" Mar 26 23:58:52.558: INFO: stdout: "" Mar 26 23:58:52.558: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:58:52.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8224" for this suite. 
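The exec output above shows the conversion test probing its service three ways with nc (service DNS name, allocated ClusterIP, and node IP plus NodePort). The conversion itself amounts to replacing an ExternalName spec with a NodePort spec; a hedged sketch with hypothetical names (demo-svc, app=demo), using kubectl replace so the old externalName field is dropped along with the rest of the spec:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
cat <<EOF | kubectl replace -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 80
EOF
# probes in the same style the test runs from its exec pod:
#   nc -zv -t -w 2 demo-svc 80          # service DNS name
#   nc -zv -t -w 2 <cluster-ip> 80      # ClusterIP allocated on conversion
#   nc -zv -t -w 2 <node-ip> <port>     # NodePort opened on every node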
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.198 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":57,"skipped":1013,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:58:52.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 23:58:58.785: INFO: DNS probes using dns-test-e168042e-dcd9-4ba1-ab7c-f245826c9323 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 23:59:04.974: INFO: File wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:04.977: INFO: File jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 26 23:59:04.977: INFO: Lookups using dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae failed for: [wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local] Mar 26 23:59:09.982: INFO: File wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:09.986: INFO: File jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:09.987: INFO: Lookups using dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae failed for: [wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local] Mar 26 23:59:14.982: INFO: File wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:14.985: INFO: File jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:14.985: INFO: Lookups using dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae failed for: [wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local] Mar 26 23:59:19.983: INFO: File wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:19.986: INFO: File jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:19.986: INFO: Lookups using dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae failed for: [wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local] Mar 26 23:59:24.983: INFO: File wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 26 23:59:24.987: INFO: File jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local from pod dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 26 23:59:24.987: INFO: Lookups using dns-8980/dns-test-cc722065-9a15-4beb-927d-786c64d680ae failed for: [wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local] Mar 26 23:59:29.987: INFO: DNS probes using dns-test-cc722065-9a15-4beb-927d-786c64d680ae succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8980.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8980.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8980.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8980.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 26 23:59:36.489: INFO: DNS probes using dns-test-2c336f76-ec70-41b5-8638-c3cee58df284 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:59:36.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8980" for this suite. • [SLOW TEST:43.990 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":58,"skipped":1018,"failed":0} SS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:59:36.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:59:37.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6220" for this suite. 
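The table-transformation test above asks the apiserver for the Table representation of a backend that cannot produce one and expects 406 Not Acceptable. For contrast, a well-formed Table request against an ordinary resource looks like this (a sketch via kubectl proxy; the port and namespace are illustrative):

kubectl proxy --port=8001 &
curl -H 'Accept: application/json;as=Table;g=meta.k8s.io;v=v1' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods
# returns {"kind":"Table","apiVersion":"meta.k8s.io/v1",...} with
# columnDefinitions and rows; a backend without a Table conversion
# for the requested media type answers 406 instead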
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":59,"skipped":1020,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:59:37.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 26 23:59:37.138: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 26 23:59:40.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 create -f -' Mar 26 23:59:43.435: INFO: stderr: "" Mar 26 23:59:43.436: INFO: stdout: "e2e-test-crd-publish-openapi-2350-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 26 23:59:43.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 delete e2e-test-crd-publish-openapi-2350-crds test-foo' Mar 26 23:59:43.564: INFO: stderr: "" Mar 26 23:59:43.564: INFO: stdout: "e2e-test-crd-publish-openapi-2350-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 26 23:59:43.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 apply -f -' Mar 26 23:59:43.823: INFO: stderr: "" Mar 26 23:59:43.824: INFO: stdout: "e2e-test-crd-publish-openapi-2350-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 26 23:59:43.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 delete e2e-test-crd-publish-openapi-2350-crds test-foo' Mar 26 23:59:43.948: INFO: stderr: "" Mar 26 23:59:43.949: INFO: stdout: "e2e-test-crd-publish-openapi-2350-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 26 23:59:43.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 create -f -' Mar 26 23:59:44.179: INFO: rc: 1 Mar 26 23:59:44.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 apply -f -' Mar 26 23:59:44.410: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 26 23:59:44.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 create -f 
-' Mar 26 23:59:44.660: INFO: rc: 1 Mar 26 23:59:44.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2888 apply -f -' Mar 26 23:59:44.892: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 26 23:59:44.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2350-crds' Mar 26 23:59:45.146: INFO: stderr: "" Mar 26 23:59:45.147: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2350-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 26 23:59:45.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2350-crds.metadata' Mar 26 23:59:45.379: INFO: stderr: "" Mar 26 23:59:45.379: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2350-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 26 23:59:45.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2350-crds.spec' Mar 26 23:59:45.645: INFO: stderr: "" Mar 26 23:59:45.646: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2350-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 26 23:59:45.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2350-crds.spec.bars' Mar 26 23:59:45.889: INFO: stderr: "" Mar 26 23:59:45.889: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2350-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 26 23:59:45.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2350-crds.spec.bars2' Mar 26 23:59:46.119: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:59:49.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2888" for this suite. • [SLOW TEST:11.988 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":60,"skipped":1040,"failed":0} SSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:59:49.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Mar 26 23:59:49.108: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5917" to be "Succeeded or Failed" Mar 26 23:59:49.157: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.775678ms Mar 26 23:59:51.161: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053773502s Mar 26 23:59:53.165: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057441596s Mar 26 23:59:55.169: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061265932s STEP: Saw pod success Mar 26 23:59:55.169: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 26 23:59:55.171: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 26 23:59:55.243: INFO: Waiting for pod pod-host-path-test to disappear Mar 26 23:59:55.250: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 26 23:59:55.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5917" for this suite. • [SLOW TEST:6.242 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1048,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 26 23:59:55.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 26 23:59:55.835: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 26 23:59:57.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863995, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863995, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863995, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720863995, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:00:00.856: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:00:00.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:00:02.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2495" for this suite. STEP: Destroying namespace "webhook-2495-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.904 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":62,"skipped":1060,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:00:02.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 27 00:00:02.235: INFO: Waiting up to 5m0s for pod "pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c" in namespace "emptydir-1414" to be "Succeeded or Failed" Mar 27 00:00:02.238: INFO: Pod "pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.030656ms Mar 27 00:00:04.252: INFO: Pod "pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01778275s Mar 27 00:00:06.258: INFO: Pod "pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02377455s STEP: Saw pod success Mar 27 00:00:06.258: INFO: Pod "pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c" satisfied condition "Succeeded or Failed" Mar 27 00:00:06.268: INFO: Trying to get logs from node latest-worker pod pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c container test-container: STEP: delete the pod Mar 27 00:00:06.287: INFO: Waiting for pod pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c to disappear Mar 27 00:00:06.325: INFO: Pod pod-0ecd9978-1b2f-4912-a896-0cb947a95b8c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:00:06.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1414" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:00:06.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-867ca3bd-0334-42a1-b1e6-9bce7cc36602 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-867ca3bd-0334-42a1-b1e6-9bce7cc36602 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:01:38.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7787" for this suite. 
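The projected-configMap test above mounts a configMap through a projected volume, rewrites the configMap, and then watches the mounted file until the kubelet propagates the change, which is why this test runs long. A minimal sketch of the same sequence with hypothetical names (demo-config, cfg-demo):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  key: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cfg-demo
spec:
  containers:
  - name: main
    image: busybox                      # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl create configmap demo-config --from-literal=key=value-2 \
  --dry-run=client -o yaml | kubectl replace -f -
# projected (and plain configMap) volume contents are refreshed on the
# kubelet's periodic sync, not instantly:
kubectl exec cfg-demo -- cat /etc/cfg/key    # eventually prints value-2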
• [SLOW TEST:92.647 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1084,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:01:38.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0327 00:02:19.275014 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 27 00:02:19.275: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:02:19.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3658" for this suite. 
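The garbage-collector test above deletes a replication controller with an orphaning delete, then waits 30 seconds to confirm the GC leaves the controller's pods in place. The command-line equivalent, with hypothetical names (demo-rc, app=demo; the pause image is just a placeholder workload):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc
spec:
  replicas: 2
  selector:
    app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
EOF
# an orphaning delete sets propagationPolicy=Orphan on the DELETE call;
# kubectl of this vintage spells it --cascade=false, v1.20+ --cascade=orphan
kubectl delete rc demo-rc --cascade=orphan
kubectl get pods -l app=demo   # pods survive; the GC clears their ownerReferences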
• [SLOW TEST:40.302 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":65,"skipped":1093,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:02:19.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8710 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8710;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8710 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8710;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8710.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8710.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8710.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8710.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 102.207.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.207.102_udp@PTR;check="$$(dig +tcp +noall +answer +search 102.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.102_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8710 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8710;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8710 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8710;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8710.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8710.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8710.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8710.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 102.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.102_udp@PTR;check="$$(dig +tcp +noall +answer +search 102.207.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.207.102_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 27 00:02:25.579: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.605: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.803: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.841: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.846: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.850: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.854: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.868: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.871: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.969: INFO: Unable to read jessie_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.979: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.982: INFO: Unable to read jessie_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.985: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.987: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:25.990: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:26.044: INFO: Lookups using dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8710 wheezy_tcp@dns-test-service.dns-8710 wheezy_udp@dns-test-service.dns-8710.svc wheezy_tcp@dns-test-service.dns-8710.svc wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8710 jessie_tcp@dns-test-service.dns-8710 jessie_udp@dns-test-service.dns-8710.svc jessie_tcp@dns-test-service.dns-8710.svc jessie_udp@_http._tcp.dns-test-service.dns-8710.svc jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc] Mar 27 00:02:31.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.054: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.058: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.061: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.065: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.068: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.071: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.074: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.097: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.099: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.102: INFO: Unable to read jessie_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.106: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.109: INFO: Unable to read jessie_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.115: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.117: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:31.132: INFO: Lookups using dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8710 wheezy_tcp@dns-test-service.dns-8710 wheezy_udp@dns-test-service.dns-8710.svc wheezy_tcp@dns-test-service.dns-8710.svc wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8710 jessie_tcp@dns-test-service.dns-8710 jessie_udp@dns-test-service.dns-8710.svc jessie_tcp@dns-test-service.dns-8710.svc jessie_udp@_http._tcp.dns-test-service.dns-8710.svc jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc] Mar 27 00:02:36.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.051: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.054: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.057: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710 from pod 
dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.059: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.062: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.064: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.067: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.088: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.091: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.094: INFO: Unable to read jessie_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.097: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.100: INFO: Unable to read jessie_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.106: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.109: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:36.127: INFO: Lookups using dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8710 wheezy_tcp@dns-test-service.dns-8710 wheezy_udp@dns-test-service.dns-8710.svc wheezy_tcp@dns-test-service.dns-8710.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8710 jessie_tcp@dns-test-service.dns-8710 jessie_udp@dns-test-service.dns-8710.svc jessie_tcp@dns-test-service.dns-8710.svc jessie_udp@_http._tcp.dns-test-service.dns-8710.svc jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc] Mar 27 00:02:41.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.057: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.061: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.064: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.067: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.071: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.074: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.098: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.101: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.104: INFO: Unable to read jessie_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.106: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.109: INFO: Unable to read jessie_udp@dns-test-service.dns-8710.svc from pod 
dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.114: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.116: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:41.133: INFO: Lookups using dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8710 wheezy_tcp@dns-test-service.dns-8710 wheezy_udp@dns-test-service.dns-8710.svc wheezy_tcp@dns-test-service.dns-8710.svc wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8710 jessie_tcp@dns-test-service.dns-8710 jessie_udp@dns-test-service.dns-8710.svc jessie_tcp@dns-test-service.dns-8710.svc jessie_udp@_http._tcp.dns-test-service.dns-8710.svc jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc] Mar 27 00:02:46.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.057: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.060: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.064: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.067: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.069: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.072: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod 
dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.092: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.094: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.098: INFO: Unable to read jessie_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.101: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.104: INFO: Unable to read jessie_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.124: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.128: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:46.147: INFO: Lookups using dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8710 wheezy_tcp@dns-test-service.dns-8710 wheezy_udp@dns-test-service.dns-8710.svc wheezy_tcp@dns-test-service.dns-8710.svc wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8710 jessie_tcp@dns-test-service.dns-8710 jessie_udp@dns-test-service.dns-8710.svc jessie_tcp@dns-test-service.dns-8710.svc jessie_udp@_http._tcp.dns-test-service.dns-8710.svc jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc] Mar 27 00:02:51.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.054: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.057: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the 
server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.068: INFO: Unable to read wheezy_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.072: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.075: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.078: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.097: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.100: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.103: INFO: Unable to read jessie_udp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710 from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.110: INFO: Unable to read jessie_udp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.113: INFO: Unable to read jessie_tcp@dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.118: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc from pod dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94: the server could not find the requested resource (get pods dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94) Mar 27 00:02:51.135: INFO: Lookups using dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8710 wheezy_tcp@dns-test-service.dns-8710 wheezy_udp@dns-test-service.dns-8710.svc wheezy_tcp@dns-test-service.dns-8710.svc wheezy_udp@_http._tcp.dns-test-service.dns-8710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8710 jessie_tcp@dns-test-service.dns-8710 jessie_udp@dns-test-service.dns-8710.svc jessie_tcp@dns-test-service.dns-8710.svc jessie_udp@_http._tcp.dns-test-service.dns-8710.svc jessie_tcp@_http._tcp.dns-test-service.dns-8710.svc] Mar 27 00:02:56.154: INFO: DNS probes using dns-8710/dns-test-47b0b369-39ef-4115-b2f1-8aaf07b0fa94 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:02:56.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8710" for this suite. • [SLOW TEST:37.510 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":66,"skipped":1095,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:02:56.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 27 00:03:01.545: INFO: Successfully updated pod "labelsupdated11b86a6-af95-4f15-8073-8eba081f4a13" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:03.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2398" for this suite. 
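[Editor's note] The projected downwardAPI test above relabels a live pod and waits for the change to appear in the mounted file. A sketch of the same check with kubectl, assuming a hypothetical pod labels-demo that projects metadata.labels to /etc/podinfo/labels:

    kubectl label pod labels-demo stage=updated --overwrite
    # the kubelet rewrites the projected file on its periodic sync,
    # so the new label shows up after a short delay
    kubectl exec labels-demo -- cat /etc/podinfo/labels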
• [SLOW TEST:6.789 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:03.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:03:03.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32" in namespace "downward-api-2266" to be "Succeeded or Failed" Mar 27 00:03:03.674: INFO: Pod "downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.839935ms Mar 27 00:03:05.723: INFO: Pod "downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052652331s Mar 27 00:03:07.727: INFO: Pod "downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056555681s STEP: Saw pod success Mar 27 00:03:07.727: INFO: Pod "downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32" satisfied condition "Succeeded or Failed" Mar 27 00:03:07.729: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32 container client-container: STEP: delete the pod Mar 27 00:03:07.747: INFO: Waiting for pod downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32 to disappear Mar 27 00:03:07.767: INFO: Pod downwardapi-volume-a3434ec4-027e-4f9d-94cc-7dd4f85f1a32 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:07.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2266" for this suite. 
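[Editor's note] The Downward API volume test above asserts a per-item file mode. A minimal manifest sketch with hypothetical names; the mode: field on the volume item is the knob under test:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            mode: 0400            # per-item mode; DefaultMode would apply otherwise
            fieldRef:
              fieldPath: metadata.name
    EOF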
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1131,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:07.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:07.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6950" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":69,"skipped":1142,"failed":0} SSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:07.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:07.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9481" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":70,"skipped":1145,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:07.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 27 00:03:08.084: INFO: Waiting up to 5m0s for pod "downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f" in namespace "downward-api-1154" to be "Succeeded or Failed" Mar 27 00:03:08.087: INFO: Pod "downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617241ms Mar 27 00:03:10.091: INFO: Pod "downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006861416s Mar 27 00:03:12.095: INFO: Pod "downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011112467s STEP: Saw pod success Mar 27 00:03:12.095: INFO: Pod "downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f" satisfied condition "Succeeded or Failed" Mar 27 00:03:12.099: INFO: Trying to get logs from node latest-worker pod downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f container dapi-container: STEP: delete the pod Mar 27 00:03:12.282: INFO: Waiting for pod downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f to disappear Mar 27 00:03:12.320: INFO: Pod downward-api-3f73532d-812e-4fe8-95c8-f4bb6ead932f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:12.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1154" for this suite. 
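[Editor's note] The Downward API test above injects pod identity through environment variables. A manifest sketch (hypothetical names) of the fieldRef plumbing it exercises:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
    EOF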
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1151,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:12.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:03:12.627: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1" in namespace "downward-api-9835" to be "Succeeded or Failed" Mar 27 00:03:12.638: INFO: Pod "downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.859465ms Mar 27 00:03:14.663: INFO: Pod "downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036264045s Mar 27 00:03:16.668: INFO: Pod "downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040865645s STEP: Saw pod success Mar 27 00:03:16.668: INFO: Pod "downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1" satisfied condition "Succeeded or Failed" Mar 27 00:03:16.671: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1 container client-container: STEP: delete the pod Mar 27 00:03:16.708: INFO: Waiting for pod downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1 to disappear Mar 27 00:03:16.721: INFO: Pod downwardapi-volume-1e5a95df-1e35-41f1-bb3e-b803b861cdd1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:16.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9835" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1157,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:16.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:16.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8472" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":73,"skipped":1158,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:16.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 27 00:03:16.950: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 27 00:03:21.953: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:21.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1863" for this suite. 
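[Editor's note] The ReplicationController test above flips a pod's label out of the rc's selector and expects the pod to be released rather than deleted. A sketch with hypothetical names (pod-release-xxxxx stands for the generated pod name):

    # move the pod out of the rc's label selector
    kubectl label pod pod-release-xxxxx name=released --overwrite
    # the rc stops counting it and creates a replacement; the released
    # pod keeps running, its rc ownerReference removed
    kubectl get pods -l name=pod-release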
• [SLOW TEST:5.170 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":74,"skipped":1160,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:22.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:03:22.100: INFO: Creating deployment "webserver-deployment" Mar 27 00:03:22.105: INFO: Waiting for observed generation 1 Mar 27 00:03:24.145: INFO: Waiting for all required pods to come up Mar 27 00:03:24.150: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 27 00:03:32.160: INFO: Waiting for deployment "webserver-deployment" to complete Mar 27 00:03:32.166: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 27 00:03:32.172: INFO: Updating deployment webserver-deployment Mar 27 00:03:32.172: INFO: Waiting for observed generation 2 Mar 27 00:03:34.198: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 27 00:03:34.201: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 27 00:03:34.203: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 27 00:03:34.210: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 27 00:03:34.210: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 27 00:03:34.212: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 27 00:03:34.216: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 27 00:03:34.216: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 27 00:03:34.220: INFO: Updating deployment webserver-deployment Mar 27 00:03:34.220: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 27 00:03:34.322: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 27 00:03:34.358: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 27 00:03:34.658: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6104 
/apis/apps/v1/namespaces/deployment-6104/deployments/webserver-deployment 5207413c-84d6-4efa-a6a6-c43673535cc9 3071833 3 2020-03-27 00:03:22 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00225eeb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-27 00:03:32 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-27 00:03:34 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 27 00:03:34.735: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6104 /apis/apps/v1/namespaces/deployment-6104/replicasets/webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 3071888 3 2020-03-27 00:03:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5207413c-84d6-4efa-a6a6-c43673535cc9 0xc00225f407 0xc00225f408}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00225f478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:03:34.735: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 27 00:03:34.735: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6104 /apis/apps/v1/namespaces/deployment-6104/replicasets/webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 3071878 3 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5207413c-84d6-4efa-a6a6-c43673535cc9 0xc00225f347 0xc00225f348}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00225f3a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:03:34.820: INFO: Pod "webserver-deployment-595b5b9587-5bkws" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5bkws webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-5bkws 82125ffb-3b21-43e2-a8c0-3035a4a0c8d9 3071847 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc00225fbb7 0xc00225fbb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.820: INFO: Pod "webserver-deployment-595b5b9587-7gdkg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7gdkg webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-7gdkg dc367d58-fdf7-4f4b-ae11-53c83fc3a396 3071835 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc00225fcd7 0xc00225fcd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.825: INFO: Pod "webserver-deployment-595b5b9587-7qc2g" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7qc2g webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-7qc2g 
aab8aa68-fad9-4781-bd14-420cc5ffa062 3071751 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc00225fea7 0xc00225fea8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.68,StartTime:2020-03-27 00:03:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://95e63859d8a07df0380d3296782a25b48898f7481e6f7853ab5ad1234e3a2a80,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.825: INFO: Pod "webserver-deployment-595b5b9587-7s4d6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7s4d6 webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-7s4d6 30928281-9967-4a08-8de9-e5cdb81e60aa 3071839 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042840c7 0xc0042840c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Toler
ations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.826: INFO: Pod "webserver-deployment-595b5b9587-865gk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-865gk webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-865gk 77c3ad33-0e4d-4e1b-89b7-204b9e1bbde4 3071710 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042841e7 0xc0042841e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomai
n:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.199,StartTime:2020-03-27 00:03:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8fddf088543f63c26e5043078c72fe59c781b82627da9df728423ad2bf2486a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.826: INFO: Pod "webserver-deployment-595b5b9587-87dmp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-87dmp webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-87dmp f81a9a0e-83d8-470a-811e-064f489aca65 3071724 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004284367 0xc004284368}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.66,StartTime:2020-03-27 00:03:22 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b60b017b4b84621e10f0a5fbaf17aa0f9b5360edd1a707817198608e518216a6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.826: INFO: Pod "webserver-deployment-595b5b9587-c2mdg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c2mdg webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-c2mdg 2fc999b4-3e28-44ec-b0d5-e541b1aa8cd9 3071867 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042844e7 0xc0042844e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreach
able,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.826: INFO: Pod "webserver-deployment-595b5b9587-c2whz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c2whz webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-c2whz f189783e-6c2a-4545-af0a-0756b57f3972 3071869 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004284607 0xc004284608}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.i
o/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.826: INFO: Pod "webserver-deployment-595b5b9587-fvwd5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fvwd5 webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-fvwd5 bc5464be-4283-4ce3-8ddd-2ec3be74f6dd 3071698 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004284727 0xc004284728}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitC
ontainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.64,StartTime:2020-03-27 00:03:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42727e4fc7b2c1ff296837c109bf0c6b373cec9e45bb3722aa9d4754ea194181,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.826: INFO: Pod "webserver-deployment-595b5b9587-gdkb2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gdkb2 webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-gdkb2 7cf3b922-bb6c-4167-acba-ada99d03fc6a 3071865 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042848a7 0xc0042848a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.827: INFO: Pod "webserver-deployment-595b5b9587-md6kw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-md6kw webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-md6kw ba54e3a4-4a45-4e6f-ad36-756910b90979 3071834 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042849e7 0xc0042849e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.827: INFO: Pod "webserver-deployment-595b5b9587-mvxzj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mvxzj webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-mvxzj 
97d8643d-aeb2-406f-be9d-6edd425c4301 3071752 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004284b57 0xc004284b58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.201,StartTime:2020-03-27 00:03:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://80fabfc425f1b9b91fe577e12a5b68eb5261cf09deffbdbfb52b5abe463c902c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.827: INFO: Pod "webserver-deployment-595b5b9587-plgp9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-plgp9 webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-plgp9 83f9c2b0-d814-40f5-b54e-c8d2ae4aa42d 3071709 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004284e27 0xc004284e28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerati
ons:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.65,StartTime:2020-03-27 00:03:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8b1e7773d9dc32e1b938d58d128f593dec34703852d5be424bd7cbc43780c42e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.827: INFO: Pod "webserver-deployment-595b5b9587-qmtpv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qmtpv webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-qmtpv a7717983-a74e-4e9f-9963-ba2449aa9c5a 3071870 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004284fd7 0xc004284fd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.827: INFO: Pod "webserver-deployment-595b5b9587-rc9th" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rc9th webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-rc9th 1f43cf25-d9d4-4bf7-ac21-8f262a009d02 3071700 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042850f7 0xc0042850f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.198,StartTime:2020-03-27 00:03:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://877422fbcf6e5e6bfabde8a0700a27aefec7cd08d80dbbe3291dc15d0249b402,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.198,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.828: INFO: Pod "webserver-deployment-595b5b9587-rj4sl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rj4sl webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-rj4sl 2bb038d9-95c0-4d8c-945d-a116cbd6bb1c 3071743 0 2020-03-27 00:03:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004285287 0xc004285288}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-
ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.67,StartTime:2020-03-27 00:03:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:03:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1ad269dcc2140c02bcf108553f27571aacbce15990df122da813f613b41deede,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.828: INFO: Pod "webserver-deployment-595b5b9587-vntzm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vntzm webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-vntzm 72575fd8-2e5b-4bf7-bf6f-cfc72ce3ac30 3071841 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004285577 0xc004285578}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.828: INFO: Pod "webserver-deployment-595b5b9587-xtcf4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xtcf4 webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-xtcf4 9646b5be-5805-4f1b-8200-fec1d3145130 3071842 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc004285697 0xc004285698}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.828: INFO: Pod "webserver-deployment-595b5b9587-z87tx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z87tx webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-z87tx 
998d476c-6f65-4a47-b8a2-58d05665b1db 3071866 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042857b7 0xc0042857b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.828: INFO: Pod "webserver-deployment-595b5b9587-zrc9d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zrc9d 
webserver-deployment-595b5b9587- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-595b5b9587-zrc9d a96553e6-d86a-4929-b26d-2c7580ffb0e5 3071873 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dfde9940-bed2-4476-bc34-86d76df19716 0xc0042858d7 0xc0042858d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-27 00:03:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.828: INFO: Pod "webserver-deployment-c7997dcc8-28twf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-28twf webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-28twf 2788c9fe-8d53-443c-9e00-41be29c4affe 3071864 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc004285a37 0xc004285a38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitC
ontainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.829: INFO: Pod "webserver-deployment-c7997dcc8-6wlww" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6wlww webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-6wlww 1b4dd46a-5dee-4655-8f0b-518d4e977a40 3071785 0 2020-03-27 00:03:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc004285b67 0xc004285b68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalO
bjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-27 00:03:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.829: INFO: Pod "webserver-deployment-c7997dcc8-7lbfc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7lbfc webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-7lbfc 3880b55a-ec03-4d2d-838f-c4ed98988b3c 3071868 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc004285ce7 0xc004285ce8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.829: INFO: Pod "webserver-deployment-c7997dcc8-dw2nh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dw2nh webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-dw2nh 61afc166-a970-462a-928d-da6c390247cf 3071875 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc004285e17 0xc004285e18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.829: INFO: Pod "webserver-deployment-c7997dcc8-kc9mj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kc9mj webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-kc9mj 27cbc2f8-00a3-46d7-a814-7a973c2b4cb3 3071845 0 2020-03-27 
00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc004285f57 0xc004285f58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.829: INFO: Pod "webserver-deployment-c7997dcc8-lkz4k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lkz4k webserver-deployment-c7997dcc8- deployment-6104 
/api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-lkz4k a3b99ea9-f0d6-4456-8684-98f37a1ee4ed 3071803 0 2020-03-27 00:03:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc0024640f7 0xc0024640f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 
00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-27 00:03:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.829: INFO: Pod "webserver-deployment-c7997dcc8-pvk8q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pvk8q webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-pvk8q c76859b1-ed06-41e1-be32-2bd3d70970e2 3071798 0 2020-03-27 00:03:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc002464487 0xc002464488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-27 00:03:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.830: INFO: Pod "webserver-deployment-c7997dcc8-qdxrt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qdxrt webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-qdxrt 83305568-31df-4ebd-8339-442b79039133 3071859 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc002464697 0xc002464698}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.830: INFO: Pod "webserver-deployment-c7997dcc8-qgmsz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qgmsz webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-qgmsz 54516b34-dd54-4f6c-850b-ba6a1fef39bd 3071860 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc0024648f7 0xc0024648f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.830: INFO: Pod "webserver-deployment-c7997dcc8-rkgqp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rkgqp webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-rkgqp 94faad12-d7bf-4515-8f6d-3afa01c55001 3071806 0 2020-03-27 
00:03:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc002464b07 0xc002464b08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-27 00:03:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.830: INFO: Pod "webserver-deployment-c7997dcc8-sr9fz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sr9fz webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-sr9fz 9db6fefa-be0c-4a6b-b4bf-5dedcc0e0b5a 3071840 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc002464e57 0xc002464e58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Ope
rator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.830: INFO: Pod "webserver-deployment-c7997dcc8-sv5gm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sv5gm webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-sv5gm 3fadb080-be70-494f-9ebe-149688524d4f 3071890 0 2020-03-27 00:03:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc002465017 0xc002465018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Eff
ect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-27 00:03:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:03:34.831: INFO: Pod "webserver-deployment-c7997dcc8-xwzc8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xwzc8 webserver-deployment-c7997dcc8- deployment-6104 /api/v1/namespaces/deployment-6104/pods/webserver-deployment-c7997dcc8-xwzc8 4e829db0-a798-4ef3-9075-d9724832d4ed 3071783 0 2020-03-27 00:03:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 680f584a-102a-430c-8319-93524f79bb5c 0xc0024652a7 0xc0024652a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvzh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvzh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:03:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-27 00:03:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:03:34.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6104" for this suite. • [SLOW TEST:12.948 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":75,"skipped":1175,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:03:34.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-8769b85b-1769-4dbc-a526-a4b0861135d7 in namespace container-probe-8734 Mar 27 00:03:51.406: INFO: Started pod liveness-8769b85b-1769-4dbc-a526-a4b0861135d7 in namespace container-probe-8734 STEP: checking the pod's current state and verifying that restartCount is present Mar 27 00:03:51.450: INFO: Initial restart count of pod liveness-8769b85b-1769-4dbc-a526-a4b0861135d7 is 0 Mar 27 00:04:15.548: INFO: Restart count of pod container-probe-8734/liveness-8769b85b-1769-4dbc-a526-a4b0861135d7 is now 1 (24.097641794s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:04:15.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8734" for this suite. 
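------------------------------
The /healthz probe test above works by running a server that deliberately starts failing its health endpoint, then watching restartCount climb from 0 to 1 (about 24s elapsed here). For reference, a pod of the kind this test creates can be sketched as the manifest below; it uses the upstream k8s.gcr.io/liveness sample image, which returns 500 from /healthz after roughly ten seconds, and the pod name is illustrative rather than taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example    # illustrative; the e2e pod name is generated
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # sample server that fails /healthz after ~10s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
------------------------------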
• [SLOW TEST:40.622 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1182,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:04:15.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:04:15.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd" in namespace "projected-2560" to be "Succeeded or Failed" Mar 27 00:04:15.677: INFO: Pod "downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223852ms Mar 27 00:04:17.682: INFO: Pod "downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008534689s Mar 27 00:04:19.685: INFO: Pod "downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011801531s STEP: Saw pod success Mar 27 00:04:19.685: INFO: Pod "downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd" satisfied condition "Succeeded or Failed" Mar 27 00:04:19.688: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd container client-container: STEP: delete the pod Mar 27 00:04:19.721: INFO: Waiting for pod downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd to disappear Mar 27 00:04:19.730: INFO: Pod downwardapi-volume-fa23753f-f4de-4497-8482-d54f91d9b9fd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:04:19.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2560" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1183,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:04:19.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 27 00:04:25.851: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.855: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.858: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.861: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.870: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.873: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.877: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.880: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:25.886: INFO: Lookups using dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local] Mar 27 00:04:30.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource 
(get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.895: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.899: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.902: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.911: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.914: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.917: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.920: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:30.926: INFO: Lookups using dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local] Mar 27 00:04:35.898: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.901: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.904: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.907: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local from 
pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.918: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.921: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.923: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.926: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:35.935: INFO: Lookups using dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local] Mar 27 00:04:40.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.895: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.899: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.903: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.911: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.914: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods 
dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.917: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.920: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:40.926: INFO: Lookups using dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local] Mar 27 00:04:45.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.895: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.899: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.902: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.913: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.916: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.919: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.921: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:45.927: INFO: Lookups using dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local] Mar 27 00:04:50.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.894: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.898: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.901: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.910: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.913: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.916: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.919: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local from pod dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc: the server could not find the requested resource (get pods dns-test-81886c93-4f10-4106-8c63-03ab979409bc) Mar 27 00:04:50.925: INFO: Lookups using dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1139.svc.cluster.local jessie_udp@dns-test-service-2.dns-1139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1139.svc.cluster.local] Mar 27 00:04:55.929: INFO: DNS probes using dns-1139/dns-test-81886c93-4f10-4106-8c63-03ab979409bc succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:04:56.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1139" for this suite. • [SLOW TEST:36.289 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":78,"skipped":1192,"failed":0} [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:04:56.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 in namespace container-probe-2183 Mar 27 00:05:00.466: INFO: Started pod liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 in namespace container-probe-2183 STEP: checking the pod's current state and verifying that restartCount is present Mar 27 00:05:00.469: INFO: Initial restart count of pod liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 is 0 Mar 27 00:05:18.525: INFO: Restart count of pod container-probe-2183/liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 is now 1 (18.056193175s elapsed) Mar 27 00:05:38.564: INFO: Restart count of pod container-probe-2183/liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 is now 2 (38.095028667s elapsed) Mar 27 00:05:58.604: INFO: Restart count of pod container-probe-2183/liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 is now 3 (58.134877591s elapsed) Mar 27 00:06:18.643: INFO: Restart count of pod container-probe-2183/liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 is now 4 (1m18.173666639s elapsed) Mar 27 00:07:18.768: INFO: Restart count of pod container-probe-2183/liveness-879f66a9-bac8-4238-b90c-a47ff8ae93e0 is now 5 (2m18.29877049s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:07:18.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2183" for this suite. 
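------------------------------
Before moving on, a note on the subdomain records probed in the dns-1139 test above: names of the form <hostname>.<service>.<namespace>.svc.cluster.local resolve only when a headless Service selects the pod and the pod declares matching hostname and subdomain fields, which is exactly what the dig loops keep retrying until CoreDNS has the records. A minimal sketch of the two objects involved (port and image are illustrative, not taken from the log):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None               # headless: per-pod DNS records instead of a VIP
  selector:
    name: dns-querier
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier           # must match the Service selector
spec:
  hostname: dns-querier-2       # becomes the left-most DNS label
  subdomain: dns-test-service-2 # must equal the headless Service name
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]
------------------------------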
• [SLOW TEST:142.803 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1192,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:07:18.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:07:18.895: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:07:23.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2769" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1202,"failed":0} ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:07:23.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-9503 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9503 STEP: Deleting pre-stop pod Mar 27 00:07:36.138: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:07:36.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9503" for this suite. • [SLOW TEST:13.160 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":81,"skipped":1202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:07:36.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-f1cac646-c266-425e-892b-cd27785f6683 STEP: Creating a pod to test consume secrets Mar 27 00:07:36.443: INFO: Waiting up to 5m0s for pod "pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846" in namespace "secrets-4497" to be "Succeeded or Failed" Mar 27 00:07:36.512: INFO: Pod "pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846": Phase="Pending", Reason="", readiness=false. Elapsed: 69.169862ms Mar 27 00:07:38.516: INFO: Pod "pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073399253s Mar 27 00:07:40.519: INFO: Pod "pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076474121s STEP: Saw pod success Mar 27 00:07:40.519: INFO: Pod "pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846" satisfied condition "Succeeded or Failed" Mar 27 00:07:40.522: INFO: Trying to get logs from node latest-worker pod pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846 container secret-volume-test: STEP: delete the pod Mar 27 00:07:40.563: INFO: Waiting for pod pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846 to disappear Mar 27 00:07:40.566: INFO: Pod pod-secrets-82b37b28-f85e-4119-b67c-e98e24c2f846 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:07:40.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4497" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1241,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:07:40.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:07:40.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98" in namespace "projected-9165" to be "Succeeded or Failed" Mar 27 00:07:40.651: INFO: Pod "downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086548ms Mar 27 00:07:42.654: INFO: Pod "downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007581185s Mar 27 00:07:44.659: INFO: Pod "downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011880296s STEP: Saw pod success Mar 27 00:07:44.659: INFO: Pod "downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98" satisfied condition "Succeeded or Failed" Mar 27 00:07:44.667: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98 container client-container: STEP: delete the pod Mar 27 00:07:44.681: INFO: Waiting for pod downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98 to disappear Mar 27 00:07:44.685: INFO: Pod downwardapi-volume-550e433b-713e-4db7-8663-1cd619610e98 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:07:44.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9165" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:07:44.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 27 00:07:49.298: INFO: Successfully updated pod "adopt-release-qsfz7" STEP: Checking that the Job readopts the Pod Mar 27 00:07:49.298: INFO: Waiting up to 15m0s for pod "adopt-release-qsfz7" in namespace "job-8713" to be "adopted" Mar 27 00:07:49.302: INFO: Pod "adopt-release-qsfz7": Phase="Running", Reason="", readiness=true. Elapsed: 4.055654ms Mar 27 00:07:51.306: INFO: Pod "adopt-release-qsfz7": Phase="Running", Reason="", readiness=true. Elapsed: 2.008093468s Mar 27 00:07:51.306: INFO: Pod "adopt-release-qsfz7" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 27 00:07:51.814: INFO: Successfully updated pod "adopt-release-qsfz7" STEP: Checking that the Job releases the Pod Mar 27 00:07:51.814: INFO: Waiting up to 15m0s for pod "adopt-release-qsfz7" in namespace "job-8713" to be "released" Mar 27 00:07:51.823: INFO: Pod "adopt-release-qsfz7": Phase="Running", Reason="", readiness=true. Elapsed: 8.991173ms Mar 27 00:07:53.826: INFO: Pod "adopt-release-qsfz7": Phase="Running", Reason="", readiness=true. Elapsed: 2.012118424s Mar 27 00:07:53.826: INFO: Pod "adopt-release-qsfz7" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:07:53.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8713" for this suite. 
• [SLOW TEST:9.141 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":84,"skipped":1296,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:07:53.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Mar 27 00:07:54.417: INFO: created pod pod-service-account-defaultsa Mar 27 00:07:54.417: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 27 00:07:54.424: INFO: created pod pod-service-account-mountsa Mar 27 00:07:54.424: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 27 00:07:54.452: INFO: created pod pod-service-account-nomountsa Mar 27 00:07:54.452: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 27 00:07:54.472: INFO: created pod pod-service-account-defaultsa-mountspec Mar 27 00:07:54.472: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 27 00:07:54.535: INFO: created pod pod-service-account-mountsa-mountspec Mar 27 00:07:54.535: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 27 00:07:54.566: INFO: created pod pod-service-account-nomountsa-mountspec Mar 27 00:07:54.566: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 27 00:07:54.599: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 27 00:07:54.599: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 27 00:07:54.661: INFO: created pod pod-service-account-mountsa-nomountspec Mar 27 00:07:54.661: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 27 00:07:54.682: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 27 00:07:54.682: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:07:54.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4972" for this suite. 
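
Opting out of token automount can be done on the ServiceAccount or on the pod spec, and the pod-level field wins when both are set, which is exactly what the matrix of pods above probes (note pod-service-account-nomountsa-mountspec reports mount: true). A sketch with hypothetical names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                    # hypothetical
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-mountspec       # hypothetical
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true  # pod-level setting overrides the ServiceAccount's
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
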
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":85,"skipped":1309,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:07:54.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-vlnk STEP: Creating a pod to test atomic-volume-subpath Mar 27 00:07:54.904: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vlnk" in namespace "subpath-3554" to be "Succeeded or Failed" Mar 27 00:07:54.908: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053663ms Mar 27 00:07:56.912: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007644149s Mar 27 00:07:59.044: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139635568s Mar 27 00:08:01.458: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.553332744s Mar 27 00:08:03.926: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Pending", Reason="", readiness=false. Elapsed: 9.021265311s Mar 27 00:08:06.027: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 11.122282711s Mar 27 00:08:08.044: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 13.13970091s Mar 27 00:08:10.048: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 15.144122175s Mar 27 00:08:12.053: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 17.148540587s Mar 27 00:08:14.068: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 19.16400257s Mar 27 00:08:16.072: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 21.168061224s Mar 27 00:08:18.076: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 23.172198301s Mar 27 00:08:20.081: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 25.176279836s Mar 27 00:08:22.085: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 27.180557983s Mar 27 00:08:24.089: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Running", Reason="", readiness=true. Elapsed: 29.184840344s Mar 27 00:08:26.093: INFO: Pod "pod-subpath-test-projected-vlnk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 31.18889653s STEP: Saw pod success Mar 27 00:08:26.093: INFO: Pod "pod-subpath-test-projected-vlnk" satisfied condition "Succeeded or Failed" Mar 27 00:08:26.096: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-vlnk container test-container-subpath-projected-vlnk: STEP: delete the pod Mar 27 00:08:26.137: INFO: Waiting for pod pod-subpath-test-projected-vlnk to disappear Mar 27 00:08:26.164: INFO: Pod pod-subpath-test-projected-vlnk no longer exists STEP: Deleting pod pod-subpath-test-projected-vlnk Mar 27 00:08:26.164: INFO: Deleting pod "pod-subpath-test-projected-vlnk" in namespace "subpath-3554" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:08:26.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3554" for this suite. • [SLOW TEST:31.433 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":86,"skipped":1321,"failed":0} S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:08:26.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 27 00:08:36.263: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.263: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.296613 7 log.go:172] (0xc002c0a840) (0xc000c9b4a0) Create stream I0327 00:08:36.296648 7 log.go:172] (0xc002c0a840) (0xc000c9b4a0) Stream added, broadcasting: 1 I0327 00:08:36.304298 7 log.go:172] (0xc002c0a840) Reply frame received for 1 I0327 00:08:36.304344 7 log.go:172] (0xc002c0a840) (0xc000b9c140) Create stream I0327 00:08:36.304363 7 log.go:172] (0xc002c0a840) (0xc000b9c140) Stream added, broadcasting: 3 I0327 00:08:36.309458 7 log.go:172] (0xc002c0a840) Reply frame received for 3 I0327 00:08:36.309498 7 log.go:172] (0xc002c0a840) (0xc000c9b680) Create stream I0327 00:08:36.309513 7 log.go:172] (0xc002c0a840) (0xc000c9b680) Stream added, 
broadcasting: 5 I0327 00:08:36.310345 7 log.go:172] (0xc002c0a840) Reply frame received for 5 I0327 00:08:36.382896 7 log.go:172] (0xc002c0a840) Data frame received for 5 I0327 00:08:36.382940 7 log.go:172] (0xc000c9b680) (5) Data frame handling I0327 00:08:36.382968 7 log.go:172] (0xc002c0a840) Data frame received for 3 I0327 00:08:36.382984 7 log.go:172] (0xc000b9c140) (3) Data frame handling I0327 00:08:36.383000 7 log.go:172] (0xc000b9c140) (3) Data frame sent I0327 00:08:36.383014 7 log.go:172] (0xc002c0a840) Data frame received for 3 I0327 00:08:36.383027 7 log.go:172] (0xc000b9c140) (3) Data frame handling I0327 00:08:36.384165 7 log.go:172] (0xc002c0a840) Data frame received for 1 I0327 00:08:36.384189 7 log.go:172] (0xc000c9b4a0) (1) Data frame handling I0327 00:08:36.384210 7 log.go:172] (0xc000c9b4a0) (1) Data frame sent I0327 00:08:36.384242 7 log.go:172] (0xc002c0a840) (0xc000c9b4a0) Stream removed, broadcasting: 1 I0327 00:08:36.384260 7 log.go:172] (0xc002c0a840) Go away received I0327 00:08:36.384406 7 log.go:172] (0xc002c0a840) (0xc000c9b4a0) Stream removed, broadcasting: 1 I0327 00:08:36.384424 7 log.go:172] (0xc002c0a840) (0xc000b9c140) Stream removed, broadcasting: 3 I0327 00:08:36.384433 7 log.go:172] (0xc002c0a840) (0xc000c9b680) Stream removed, broadcasting: 5 Mar 27 00:08:36.384: INFO: Exec stderr: "" Mar 27 00:08:36.384: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.384: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.415796 7 log.go:172] (0xc002cb46e0) (0xc000b9c8c0) Create stream I0327 00:08:36.415825 7 log.go:172] (0xc002cb46e0) (0xc000b9c8c0) Stream added, broadcasting: 1 I0327 00:08:36.418256 7 log.go:172] (0xc002cb46e0) Reply frame received for 1 I0327 00:08:36.418305 7 log.go:172] (0xc002cb46e0) (0xc000b9caa0) Create stream I0327 00:08:36.418322 7 log.go:172] (0xc002cb46e0) (0xc000b9caa0) Stream added, broadcasting: 3 I0327 00:08:36.419270 7 log.go:172] (0xc002cb46e0) Reply frame received for 3 I0327 00:08:36.419304 7 log.go:172] (0xc002cb46e0) (0xc000b9cb40) Create stream I0327 00:08:36.419317 7 log.go:172] (0xc002cb46e0) (0xc000b9cb40) Stream added, broadcasting: 5 I0327 00:08:36.420309 7 log.go:172] (0xc002cb46e0) Reply frame received for 5 I0327 00:08:36.479703 7 log.go:172] (0xc002cb46e0) Data frame received for 5 I0327 00:08:36.479741 7 log.go:172] (0xc000b9cb40) (5) Data frame handling I0327 00:08:36.479766 7 log.go:172] (0xc002cb46e0) Data frame received for 3 I0327 00:08:36.479779 7 log.go:172] (0xc000b9caa0) (3) Data frame handling I0327 00:08:36.479797 7 log.go:172] (0xc000b9caa0) (3) Data frame sent I0327 00:08:36.479812 7 log.go:172] (0xc002cb46e0) Data frame received for 3 I0327 00:08:36.479824 7 log.go:172] (0xc000b9caa0) (3) Data frame handling I0327 00:08:36.480794 7 log.go:172] (0xc002cb46e0) Data frame received for 1 I0327 00:08:36.480821 7 log.go:172] (0xc000b9c8c0) (1) Data frame handling I0327 00:08:36.480834 7 log.go:172] (0xc000b9c8c0) (1) Data frame sent I0327 00:08:36.480858 7 log.go:172] (0xc002cb46e0) (0xc000b9c8c0) Stream removed, broadcasting: 1 I0327 00:08:36.480886 7 log.go:172] (0xc002cb46e0) Go away received I0327 00:08:36.481023 7 log.go:172] (0xc002cb46e0) (0xc000b9c8c0) Stream removed, broadcasting: 1 I0327 00:08:36.481049 7 log.go:172] (0xc002cb46e0) (0xc000b9caa0) Stream removed, broadcasting: 3 I0327 00:08:36.481071 7 log.go:172] 
(0xc002cb46e0) (0xc000b9cb40) Stream removed, broadcasting: 5 Mar 27 00:08:36.481: INFO: Exec stderr: "" Mar 27 00:08:36.481: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.481: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.512906 7 log.go:172] (0xc0026c8630) (0xc0016c85a0) Create stream I0327 00:08:36.512925 7 log.go:172] (0xc0026c8630) (0xc0016c85a0) Stream added, broadcasting: 1 I0327 00:08:36.514941 7 log.go:172] (0xc0026c8630) Reply frame received for 1 I0327 00:08:36.514984 7 log.go:172] (0xc0026c8630) (0xc0016c8780) Create stream I0327 00:08:36.515010 7 log.go:172] (0xc0026c8630) (0xc0016c8780) Stream added, broadcasting: 3 I0327 00:08:36.516055 7 log.go:172] (0xc0026c8630) Reply frame received for 3 I0327 00:08:36.516098 7 log.go:172] (0xc0026c8630) (0xc0016c88c0) Create stream I0327 00:08:36.516115 7 log.go:172] (0xc0026c8630) (0xc0016c88c0) Stream added, broadcasting: 5 I0327 00:08:36.516929 7 log.go:172] (0xc0026c8630) Reply frame received for 5 I0327 00:08:36.577605 7 log.go:172] (0xc0026c8630) Data frame received for 3 I0327 00:08:36.577636 7 log.go:172] (0xc0016c8780) (3) Data frame handling I0327 00:08:36.577647 7 log.go:172] (0xc0016c8780) (3) Data frame sent I0327 00:08:36.577659 7 log.go:172] (0xc0026c8630) Data frame received for 3 I0327 00:08:36.577669 7 log.go:172] (0xc0016c8780) (3) Data frame handling I0327 00:08:36.577689 7 log.go:172] (0xc0026c8630) Data frame received for 5 I0327 00:08:36.577699 7 log.go:172] (0xc0016c88c0) (5) Data frame handling I0327 00:08:36.579030 7 log.go:172] (0xc0026c8630) Data frame received for 1 I0327 00:08:36.579043 7 log.go:172] (0xc0016c85a0) (1) Data frame handling I0327 00:08:36.579049 7 log.go:172] (0xc0016c85a0) (1) Data frame sent I0327 00:08:36.579058 7 log.go:172] (0xc0026c8630) (0xc0016c85a0) Stream removed, broadcasting: 1 I0327 00:08:36.579068 7 log.go:172] (0xc0026c8630) Go away received I0327 00:08:36.579250 7 log.go:172] (0xc0026c8630) (0xc0016c85a0) Stream removed, broadcasting: 1 I0327 00:08:36.579285 7 log.go:172] (0xc0026c8630) (0xc0016c8780) Stream removed, broadcasting: 3 I0327 00:08:36.579310 7 log.go:172] (0xc0026c8630) (0xc0016c88c0) Stream removed, broadcasting: 5 Mar 27 00:08:36.579: INFO: Exec stderr: "" Mar 27 00:08:36.579: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.579: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.619011 7 log.go:172] (0xc0028b1130) (0xc000c38a00) Create stream I0327 00:08:36.619037 7 log.go:172] (0xc0028b1130) (0xc000c38a00) Stream added, broadcasting: 1 I0327 00:08:36.621716 7 log.go:172] (0xc0028b1130) Reply frame received for 1 I0327 00:08:36.621742 7 log.go:172] (0xc0028b1130) (0xc0016c8960) Create stream I0327 00:08:36.621750 7 log.go:172] (0xc0028b1130) (0xc0016c8960) Stream added, broadcasting: 3 I0327 00:08:36.622638 7 log.go:172] (0xc0028b1130) Reply frame received for 3 I0327 00:08:36.622665 7 log.go:172] (0xc0028b1130) (0xc000c38be0) Create stream I0327 00:08:36.622675 7 log.go:172] (0xc0028b1130) (0xc000c38be0) Stream added, broadcasting: 5 I0327 00:08:36.623584 7 log.go:172] (0xc0028b1130) Reply frame received for 5 I0327 00:08:36.671325 7 log.go:172] (0xc0028b1130) Data frame received for 5 I0327 00:08:36.671378 7 
log.go:172] (0xc000c38be0) (5) Data frame handling I0327 00:08:36.671429 7 log.go:172] (0xc0028b1130) Data frame received for 3 I0327 00:08:36.671463 7 log.go:172] (0xc0016c8960) (3) Data frame handling I0327 00:08:36.671484 7 log.go:172] (0xc0016c8960) (3) Data frame sent I0327 00:08:36.671497 7 log.go:172] (0xc0028b1130) Data frame received for 3 I0327 00:08:36.671511 7 log.go:172] (0xc0016c8960) (3) Data frame handling I0327 00:08:36.673785 7 log.go:172] (0xc0028b1130) Data frame received for 1 I0327 00:08:36.673808 7 log.go:172] (0xc000c38a00) (1) Data frame handling I0327 00:08:36.673830 7 log.go:172] (0xc000c38a00) (1) Data frame sent I0327 00:08:36.673851 7 log.go:172] (0xc0028b1130) (0xc000c38a00) Stream removed, broadcasting: 1 I0327 00:08:36.673865 7 log.go:172] (0xc0028b1130) Go away received I0327 00:08:36.674041 7 log.go:172] (0xc0028b1130) (0xc000c38a00) Stream removed, broadcasting: 1 I0327 00:08:36.674070 7 log.go:172] (0xc0028b1130) (0xc0016c8960) Stream removed, broadcasting: 3 I0327 00:08:36.674100 7 log.go:172] (0xc0028b1130) (0xc000c38be0) Stream removed, broadcasting: 5 Mar 27 00:08:36.674: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 27 00:08:36.674: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.674: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.703640 7 log.go:172] (0xc0026c8f20) (0xc0016c8be0) Create stream I0327 00:08:36.703676 7 log.go:172] (0xc0026c8f20) (0xc0016c8be0) Stream added, broadcasting: 1 I0327 00:08:36.706342 7 log.go:172] (0xc0026c8f20) Reply frame received for 1 I0327 00:08:36.706375 7 log.go:172] (0xc0026c8f20) (0xc001ac26e0) Create stream I0327 00:08:36.706384 7 log.go:172] (0xc0026c8f20) (0xc001ac26e0) Stream added, broadcasting: 3 I0327 00:08:36.707090 7 log.go:172] (0xc0026c8f20) Reply frame received for 3 I0327 00:08:36.707114 7 log.go:172] (0xc0026c8f20) (0xc000c38dc0) Create stream I0327 00:08:36.707122 7 log.go:172] (0xc0026c8f20) (0xc000c38dc0) Stream added, broadcasting: 5 I0327 00:08:36.707813 7 log.go:172] (0xc0026c8f20) Reply frame received for 5 I0327 00:08:36.769001 7 log.go:172] (0xc0026c8f20) Data frame received for 5 I0327 00:08:36.769430 7 log.go:172] (0xc0026c8f20) Data frame received for 3 I0327 00:08:36.769498 7 log.go:172] (0xc001ac26e0) (3) Data frame handling I0327 00:08:36.769528 7 log.go:172] (0xc001ac26e0) (3) Data frame sent I0327 00:08:36.769549 7 log.go:172] (0xc0026c8f20) Data frame received for 3 I0327 00:08:36.769579 7 log.go:172] (0xc001ac26e0) (3) Data frame handling I0327 00:08:36.769606 7 log.go:172] (0xc000c38dc0) (5) Data frame handling I0327 00:08:36.771203 7 log.go:172] (0xc0026c8f20) Data frame received for 1 I0327 00:08:36.771239 7 log.go:172] (0xc0016c8be0) (1) Data frame handling I0327 00:08:36.771265 7 log.go:172] (0xc0016c8be0) (1) Data frame sent I0327 00:08:36.771288 7 log.go:172] (0xc0026c8f20) (0xc0016c8be0) Stream removed, broadcasting: 1 I0327 00:08:36.771307 7 log.go:172] (0xc0026c8f20) Go away received I0327 00:08:36.771436 7 log.go:172] (0xc0026c8f20) (0xc0016c8be0) Stream removed, broadcasting: 1 I0327 00:08:36.771456 7 log.go:172] (0xc0026c8f20) (0xc001ac26e0) Stream removed, broadcasting: 3 I0327 00:08:36.771474 7 log.go:172] (0xc0026c8f20) (0xc000c38dc0) Stream removed, broadcasting: 5 Mar 27 00:08:36.771: INFO: Exec stderr: "" 
Mar 27 00:08:36.771: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.771: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.806256 7 log.go:172] (0xc0026c9550) (0xc0016c8e60) Create stream I0327 00:08:36.806277 7 log.go:172] (0xc0026c9550) (0xc0016c8e60) Stream added, broadcasting: 1 I0327 00:08:36.808465 7 log.go:172] (0xc0026c9550) Reply frame received for 1 I0327 00:08:36.808502 7 log.go:172] (0xc0026c9550) (0xc000b9cc80) Create stream I0327 00:08:36.808521 7 log.go:172] (0xc0026c9550) (0xc000b9cc80) Stream added, broadcasting: 3 I0327 00:08:36.809561 7 log.go:172] (0xc0026c9550) Reply frame received for 3 I0327 00:08:36.809583 7 log.go:172] (0xc0026c9550) (0xc001ac2780) Create stream I0327 00:08:36.809590 7 log.go:172] (0xc0026c9550) (0xc001ac2780) Stream added, broadcasting: 5 I0327 00:08:36.810624 7 log.go:172] (0xc0026c9550) Reply frame received for 5 I0327 00:08:36.872728 7 log.go:172] (0xc0026c9550) Data frame received for 5 I0327 00:08:36.872756 7 log.go:172] (0xc001ac2780) (5) Data frame handling I0327 00:08:36.872942 7 log.go:172] (0xc0026c9550) Data frame received for 3 I0327 00:08:36.872975 7 log.go:172] (0xc000b9cc80) (3) Data frame handling I0327 00:08:36.873002 7 log.go:172] (0xc000b9cc80) (3) Data frame sent I0327 00:08:36.873021 7 log.go:172] (0xc0026c9550) Data frame received for 3 I0327 00:08:36.873033 7 log.go:172] (0xc000b9cc80) (3) Data frame handling I0327 00:08:36.874675 7 log.go:172] (0xc0026c9550) Data frame received for 1 I0327 00:08:36.874694 7 log.go:172] (0xc0016c8e60) (1) Data frame handling I0327 00:08:36.874705 7 log.go:172] (0xc0016c8e60) (1) Data frame sent I0327 00:08:36.874717 7 log.go:172] (0xc0026c9550) (0xc0016c8e60) Stream removed, broadcasting: 1 I0327 00:08:36.874733 7 log.go:172] (0xc0026c9550) Go away received I0327 00:08:36.874924 7 log.go:172] (0xc0026c9550) (0xc0016c8e60) Stream removed, broadcasting: 1 I0327 00:08:36.874954 7 log.go:172] (0xc0026c9550) (0xc000b9cc80) Stream removed, broadcasting: 3 I0327 00:08:36.874975 7 log.go:172] (0xc0026c9550) (0xc001ac2780) Stream removed, broadcasting: 5 Mar 27 00:08:36.874: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 27 00:08:36.875: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.875: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.908068 7 log.go:172] (0xc002cb4dc0) (0xc000b9d2c0) Create stream I0327 00:08:36.908092 7 log.go:172] (0xc002cb4dc0) (0xc000b9d2c0) Stream added, broadcasting: 1 I0327 00:08:36.910449 7 log.go:172] (0xc002cb4dc0) Reply frame received for 1 I0327 00:08:36.910490 7 log.go:172] (0xc002cb4dc0) (0xc000b9d400) Create stream I0327 00:08:36.910505 7 log.go:172] (0xc002cb4dc0) (0xc000b9d400) Stream added, broadcasting: 3 I0327 00:08:36.911698 7 log.go:172] (0xc002cb4dc0) Reply frame received for 3 I0327 00:08:36.911737 7 log.go:172] (0xc002cb4dc0) (0xc000c9b900) Create stream I0327 00:08:36.911749 7 log.go:172] (0xc002cb4dc0) (0xc000c9b900) Stream added, broadcasting: 5 I0327 00:08:36.912740 7 log.go:172] (0xc002cb4dc0) Reply frame received for 5 I0327 00:08:36.966731 7 log.go:172] (0xc002cb4dc0) Data frame received for 5 I0327 
00:08:36.966794 7 log.go:172] (0xc000c9b900) (5) Data frame handling I0327 00:08:36.966834 7 log.go:172] (0xc002cb4dc0) Data frame received for 3 I0327 00:08:36.966857 7 log.go:172] (0xc000b9d400) (3) Data frame handling I0327 00:08:36.966888 7 log.go:172] (0xc000b9d400) (3) Data frame sent I0327 00:08:36.966925 7 log.go:172] (0xc002cb4dc0) Data frame received for 3 I0327 00:08:36.966946 7 log.go:172] (0xc000b9d400) (3) Data frame handling I0327 00:08:36.968369 7 log.go:172] (0xc002cb4dc0) Data frame received for 1 I0327 00:08:36.968384 7 log.go:172] (0xc000b9d2c0) (1) Data frame handling I0327 00:08:36.968394 7 log.go:172] (0xc000b9d2c0) (1) Data frame sent I0327 00:08:36.968434 7 log.go:172] (0xc002cb4dc0) (0xc000b9d2c0) Stream removed, broadcasting: 1 I0327 00:08:36.968520 7 log.go:172] (0xc002cb4dc0) (0xc000b9d2c0) Stream removed, broadcasting: 1 I0327 00:08:36.968541 7 log.go:172] (0xc002cb4dc0) (0xc000b9d400) Stream removed, broadcasting: 3 I0327 00:08:36.968631 7 log.go:172] (0xc002cb4dc0) Go away received I0327 00:08:36.968662 7 log.go:172] (0xc002cb4dc0) (0xc000c9b900) Stream removed, broadcasting: 5 Mar 27 00:08:36.968: INFO: Exec stderr: "" Mar 27 00:08:36.968: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:36.968: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:36.999909 7 log.go:172] (0xc0026c9b80) (0xc0016c92c0) Create stream I0327 00:08:36.999942 7 log.go:172] (0xc0026c9b80) (0xc0016c92c0) Stream added, broadcasting: 1 I0327 00:08:37.002740 7 log.go:172] (0xc0026c9b80) Reply frame received for 1 I0327 00:08:37.002784 7 log.go:172] (0xc0026c9b80) (0xc001ac28c0) Create stream I0327 00:08:37.002798 7 log.go:172] (0xc0026c9b80) (0xc001ac28c0) Stream added, broadcasting: 3 I0327 00:08:37.003735 7 log.go:172] (0xc0026c9b80) Reply frame received for 3 I0327 00:08:37.003776 7 log.go:172] (0xc0026c9b80) (0xc001ac2960) Create stream I0327 00:08:37.003791 7 log.go:172] (0xc0026c9b80) (0xc001ac2960) Stream added, broadcasting: 5 I0327 00:08:37.004993 7 log.go:172] (0xc0026c9b80) Reply frame received for 5 I0327 00:08:37.064553 7 log.go:172] (0xc0026c9b80) Data frame received for 3 I0327 00:08:37.064697 7 log.go:172] (0xc001ac28c0) (3) Data frame handling I0327 00:08:37.064716 7 log.go:172] (0xc001ac28c0) (3) Data frame sent I0327 00:08:37.064807 7 log.go:172] (0xc0026c9b80) Data frame received for 3 I0327 00:08:37.064825 7 log.go:172] (0xc001ac28c0) (3) Data frame handling I0327 00:08:37.064929 7 log.go:172] (0xc0026c9b80) Data frame received for 5 I0327 00:08:37.064959 7 log.go:172] (0xc001ac2960) (5) Data frame handling I0327 00:08:37.066142 7 log.go:172] (0xc0026c9b80) Data frame received for 1 I0327 00:08:37.066177 7 log.go:172] (0xc0016c92c0) (1) Data frame handling I0327 00:08:37.066216 7 log.go:172] (0xc0016c92c0) (1) Data frame sent I0327 00:08:37.066544 7 log.go:172] (0xc0026c9b80) (0xc0016c92c0) Stream removed, broadcasting: 1 I0327 00:08:37.066621 7 log.go:172] (0xc0026c9b80) Go away received I0327 00:08:37.066670 7 log.go:172] (0xc0026c9b80) (0xc0016c92c0) Stream removed, broadcasting: 1 I0327 00:08:37.066726 7 log.go:172] (0xc0026c9b80) (0xc001ac28c0) Stream removed, broadcasting: 3 I0327 00:08:37.066749 7 log.go:172] (0xc0026c9b80) (0xc001ac2960) Stream removed, broadcasting: 5 Mar 27 00:08:37.066: INFO: Exec stderr: "" Mar 27 00:08:37.066: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:37.066: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:37.099968 7 log.go:172] (0xc002c0ae70) (0xc00117c320) Create stream I0327 00:08:37.099990 7 log.go:172] (0xc002c0ae70) (0xc00117c320) Stream added, broadcasting: 1 I0327 00:08:37.103157 7 log.go:172] (0xc002c0ae70) Reply frame received for 1 I0327 00:08:37.103201 7 log.go:172] (0xc002c0ae70) (0xc000b9d540) Create stream I0327 00:08:37.103214 7 log.go:172] (0xc002c0ae70) (0xc000b9d540) Stream added, broadcasting: 3 I0327 00:08:37.104383 7 log.go:172] (0xc002c0ae70) Reply frame received for 3 I0327 00:08:37.104421 7 log.go:172] (0xc002c0ae70) (0xc000c39220) Create stream I0327 00:08:37.104438 7 log.go:172] (0xc002c0ae70) (0xc000c39220) Stream added, broadcasting: 5 I0327 00:08:37.105801 7 log.go:172] (0xc002c0ae70) Reply frame received for 5 I0327 00:08:37.168751 7 log.go:172] (0xc002c0ae70) Data frame received for 5 I0327 00:08:37.168769 7 log.go:172] (0xc000c39220) (5) Data frame handling I0327 00:08:37.168799 7 log.go:172] (0xc002c0ae70) Data frame received for 3 I0327 00:08:37.168834 7 log.go:172] (0xc000b9d540) (3) Data frame handling I0327 00:08:37.168872 7 log.go:172] (0xc000b9d540) (3) Data frame sent I0327 00:08:37.168889 7 log.go:172] (0xc002c0ae70) Data frame received for 3 I0327 00:08:37.168902 7 log.go:172] (0xc000b9d540) (3) Data frame handling I0327 00:08:37.170468 7 log.go:172] (0xc002c0ae70) Data frame received for 1 I0327 00:08:37.170490 7 log.go:172] (0xc00117c320) (1) Data frame handling I0327 00:08:37.170518 7 log.go:172] (0xc00117c320) (1) Data frame sent I0327 00:08:37.170540 7 log.go:172] (0xc002c0ae70) (0xc00117c320) Stream removed, broadcasting: 1 I0327 00:08:37.170554 7 log.go:172] (0xc002c0ae70) Go away received I0327 00:08:37.170759 7 log.go:172] (0xc002c0ae70) (0xc00117c320) Stream removed, broadcasting: 1 I0327 00:08:37.170790 7 log.go:172] (0xc002c0ae70) (0xc000b9d540) Stream removed, broadcasting: 3 I0327 00:08:37.170803 7 log.go:172] (0xc002c0ae70) (0xc000c39220) Stream removed, broadcasting: 5 Mar 27 00:08:37.170: INFO: Exec stderr: "" Mar 27 00:08:37.170: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4054 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:08:37.170: INFO: >>> kubeConfig: /root/.kube/config I0327 00:08:37.206033 7 log.go:172] (0xc002cb5080) (0xc000b9d5e0) Create stream I0327 00:08:37.206061 7 log.go:172] (0xc002cb5080) (0xc000b9d5e0) Stream added, broadcasting: 1 I0327 00:08:37.209355 7 log.go:172] (0xc002cb5080) Reply frame received for 1 I0327 00:08:37.209432 7 log.go:172] (0xc002cb5080) (0xc000b9d900) Create stream I0327 00:08:37.209453 7 log.go:172] (0xc002cb5080) (0xc000b9d900) Stream added, broadcasting: 3 I0327 00:08:37.210630 7 log.go:172] (0xc002cb5080) Reply frame received for 3 I0327 00:08:37.210662 7 log.go:172] (0xc002cb5080) (0xc00117c6e0) Create stream I0327 00:08:37.210675 7 log.go:172] (0xc002cb5080) (0xc00117c6e0) Stream added, broadcasting: 5 I0327 00:08:37.211717 7 log.go:172] (0xc002cb5080) Reply frame received for 5 I0327 00:08:37.275138 7 log.go:172] (0xc002cb5080) Data frame received for 5 I0327 00:08:37.275202 7 log.go:172] (0xc00117c6e0) (5) Data frame handling I0327 00:08:37.275245 7 log.go:172] (0xc002cb5080) Data frame received for 3 I0327 
00:08:37.275266 7 log.go:172] (0xc000b9d900) (3) Data frame handling I0327 00:08:37.275290 7 log.go:172] (0xc000b9d900) (3) Data frame sent I0327 00:08:37.275369 7 log.go:172] (0xc002cb5080) Data frame received for 3 I0327 00:08:37.275400 7 log.go:172] (0xc000b9d900) (3) Data frame handling I0327 00:08:37.276928 7 log.go:172] (0xc002cb5080) Data frame received for 1 I0327 00:08:37.276960 7 log.go:172] (0xc000b9d5e0) (1) Data frame handling I0327 00:08:37.276983 7 log.go:172] (0xc000b9d5e0) (1) Data frame sent I0327 00:08:37.277011 7 log.go:172] (0xc002cb5080) (0xc000b9d5e0) Stream removed, broadcasting: 1 I0327 00:08:37.277042 7 log.go:172] (0xc002cb5080) Go away received I0327 00:08:37.277313 7 log.go:172] (0xc002cb5080) (0xc000b9d5e0) Stream removed, broadcasting: 1 I0327 00:08:37.277346 7 log.go:172] (0xc002cb5080) (0xc000b9d900) Stream removed, broadcasting: 3 I0327 00:08:37.277371 7 log.go:172] (0xc002cb5080) (0xc00117c6e0) Stream removed, broadcasting: 5 Mar 27 00:08:37.277: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:08:37.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4054" for this suite. • [SLOW TEST:11.111 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1322,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:08:37.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-6da871c1-8976-4b98-a9c8-289d827cb120 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:08:41.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8734" for this suite. 
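
The ConfigMap in this test carries both text and binary payloads; binaryData takes base64 and round-trips bytes that are not valid UTF-8. A sketch, with names and bytes chosen for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example      # hypothetical
data:
  text: "hello"                       # plain UTF-8 goes in data
binaryData:
  binary: 3q2+7w==                    # base64 for the bytes 0xde 0xad 0xbe 0xef

Mounted as a configMap volume, each key appears as a file whose contents are the decoded bytes, which is what the "Waiting for pod with binary data" step checks.
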
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1328,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:08:41.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-1452/configmap-test-218d5da3-3542-4592-880c-5450370ef5c5 STEP: Creating a pod to test consume configMaps Mar 27 00:08:41.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9" in namespace "configmap-1452" to be "Succeeded or Failed" Mar 27 00:08:41.526: INFO: Pod "pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623249ms Mar 27 00:08:43.529: INFO: Pod "pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009417945s Mar 27 00:08:45.533: INFO: Pod "pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01358561s STEP: Saw pod success Mar 27 00:08:45.533: INFO: Pod "pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9" satisfied condition "Succeeded or Failed" Mar 27 00:08:45.537: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9 container env-test: STEP: delete the pod Mar 27 00:08:45.595: INFO: Waiting for pod pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9 to disappear Mar 27 00:08:45.599: INFO: Pod pod-configmaps-80af5db9-56be-4f40-9601-ad4ebb570cd9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:08:45.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1452" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:08:45.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-f6d30922-3e25-4603-88e8-6f4ad57f3098 STEP: Creating a pod to test consume configMaps Mar 27 00:08:45.709: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0" in namespace "projected-5404" to be "Succeeded or Failed" Mar 27 00:08:45.711: INFO: Pod "pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248141ms Mar 27 00:08:47.715: INFO: Pod "pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00604105s Mar 27 00:08:49.719: INFO: Pod "pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009312341s STEP: Saw pod success Mar 27 00:08:49.719: INFO: Pod "pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0" satisfied condition "Succeeded or Failed" Mar 27 00:08:49.721: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0 container projected-configmap-volume-test: STEP: delete the pod Mar 27 00:08:49.743: INFO: Waiting for pod pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0 to disappear Mar 27 00:08:49.764: INFO: Pod pod-projected-configmaps-5beaf95e-5e85-4724-91a2-a4242f6873f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:08:49.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5404" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:08:49.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 27 00:08:49.832: INFO: Waiting up to 5m0s for pod "pod-575b9ae6-49f9-4f9e-82f0-15889c35740b" in namespace "emptydir-9740" to be "Succeeded or Failed" Mar 27 00:08:49.843: INFO: Pod "pod-575b9ae6-49f9-4f9e-82f0-15889c35740b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130585ms Mar 27 00:08:51.919: INFO: Pod "pod-575b9ae6-49f9-4f9e-82f0-15889c35740b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086633123s Mar 27 00:08:53.923: INFO: Pod "pod-575b9ae6-49f9-4f9e-82f0-15889c35740b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090912785s STEP: Saw pod success Mar 27 00:08:53.923: INFO: Pod "pod-575b9ae6-49f9-4f9e-82f0-15889c35740b" satisfied condition "Succeeded or Failed" Mar 27 00:08:53.926: INFO: Trying to get logs from node latest-worker2 pod pod-575b9ae6-49f9-4f9e-82f0-15889c35740b container test-container: STEP: delete the pod Mar 27 00:08:54.112: INFO: Waiting for pod pod-575b9ae6-49f9-4f9e-82f0-15889c35740b to disappear Mar 27 00:08:54.118: INFO: Pod pod-575b9ae6-49f9-4f9e-82f0-15889c35740b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:08:54.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9740" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1404,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:08:54.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-9c3a0d0f-6c08-46a9-8730-72b1cf891f13 STEP: Creating secret with name s-test-opt-upd-56d65ca0-249c-4bda-8ffc-7b259fd5b200 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9c3a0d0f-6c08-46a9-8730-72b1cf891f13 STEP: Updating secret s-test-opt-upd-56d65ca0-249c-4bda-8ffc-7b259fd5b200 STEP: Creating secret with name s-test-opt-create-8c3a490c-a1cc-4da4-9595-5fce0a822f53 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:09:04.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1686" for this suite. • [SLOW TEST:10.251 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:09:04.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:09:15.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-552" for this suite. • [SLOW TEST:11.515 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":93,"skipped":1499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:09:15.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:09:15.934: INFO: Creating deployment "test-recreate-deployment" Mar 27 00:09:15.952: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 27 00:09:15.987: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 27 00:09:17.994: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 27 00:09:17.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864555, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864555, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864556, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864555, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:09:20.001: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 27 00:09:20.008: INFO: Updating deployment test-recreate-deployment Mar 27 00:09:20.008: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 27 00:09:20.468: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1726 /apis/apps/v1/namespaces/deployment-1726/deployments/test-recreate-deployment 74079a38-fbd9-4ce6-aaa7-f139400f49fe 3073840 2 2020-03-27 00:09:15 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048b0f88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-27 00:09:20 +0000 UTC,LastTransitionTime:2020-03-27 00:09:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-27 00:09:20 +0000 UTC,LastTransitionTime:2020-03-27 00:09:15 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 27 00:09:20.472: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1726 /apis/apps/v1/namespaces/deployment-1726/replicasets/test-recreate-deployment-5f94c574ff 8402da8c-c8a6-4af4-a235-347af7db9c31 3073837 1 2020-03-27 00:09:20 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 74079a38-fbd9-4ce6-aaa7-f139400f49fe 0xc0048b1397 0xc0048b1398}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] 
[{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048b13f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:09:20.472: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 27 00:09:20.472: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-1726 /apis/apps/v1/namespaces/deployment-1726/replicasets/test-recreate-deployment-846c7dd955 5f8fd157-8f7f-4950-b0db-38740f94bd87 3073829 2 2020-03-27 00:09:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 74079a38-fbd9-4ce6-aaa7-f139400f49fe 0xc0048b1467 0xc0048b1468}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048b14d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:09:20.486: INFO: Pod "test-recreate-deployment-5f94c574ff-f24xr" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-f24xr test-recreate-deployment-5f94c574ff- deployment-1726 /api/v1/namespaces/deployment-1726/pods/test-recreate-deployment-5f94c574ff-f24xr 52272652-8e8a-4cb9-8f97-ca6c0d3f2aac 3073841 0 2020-03-27 00:09:20 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 8402da8c-c8a6-4af4-a235-347af7db9c31 0xc00485d967 0xc00485d968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lnx8p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lnx8p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lnx8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:09:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:09:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-27 00:09:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:09:20.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1726" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":94,"skipped":1564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:09:20.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 27 00:09:20.691: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:09:33.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3060" for this suite. 
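For reference, a minimal sketch of the kind of pod this "submitted and removed" spec drives through its watch; the pod name, image, and grace period below are illustrative assumptions, not values taken from this run:

# Sketch (assumed names/image): a pod like the one this spec submits and then
# deletes gracefully; creation, the kubelet's termination notice, and the final
# deletion are each observed through a watch on the pod list.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-demo        # hypothetical name; the run uses a generated one
spec:
  terminationGracePeriodSeconds: 30   # leaves time for the termination notice to be observed
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2       # pause image, as used elsewhere in this run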
• [SLOW TEST:12.361 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1591,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:09:33.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-4fe15f31-428f-4f8b-bf33-71fc9ce4fb00 STEP: Creating a pod to test consume configMaps Mar 27 00:09:33.098: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be" in namespace "projected-7377" to be "Succeeded or Failed" Mar 27 00:09:33.107: INFO: Pod "pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379162ms Mar 27 00:09:35.111: INFO: Pod "pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012374431s Mar 27 00:09:37.115: INFO: Pod "pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016923956s STEP: Saw pod success Mar 27 00:09:37.115: INFO: Pod "pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be" satisfied condition "Succeeded or Failed" Mar 27 00:09:37.119: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be container projected-configmap-volume-test: STEP: delete the pod Mar 27 00:09:37.153: INFO: Waiting for pod pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be to disappear Mar 27 00:09:37.167: INFO: Pod pod-projected-configmaps-d9ab9d4f-1c9e-4766-8466-25f2ccefa0be no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:09:37.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7377" for this suite. 
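A minimal sketch of the manifest shape behind the projected-configMap spec above; the ConfigMap name, key, and mount path are illustrative assumptions (the run uses generated names):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                   # assumed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/data-1"]   # the test asserts on the file contents
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:                        # projected volume, as opposed to a plain configMap volume
      sources:
      - configMap:
          name: demo-config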
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1608,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:09:37.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:09:37.245: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 27 00:09:39.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1151 create -f -' Mar 27 00:09:42.096: INFO: stderr: "" Mar 27 00:09:42.096: INFO: stdout: "e2e-test-crd-publish-openapi-3442-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 27 00:09:42.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1151 delete e2e-test-crd-publish-openapi-3442-crds test-cr' Mar 27 00:09:42.203: INFO: stderr: "" Mar 27 00:09:42.203: INFO: stdout: "e2e-test-crd-publish-openapi-3442-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 27 00:09:42.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1151 apply -f -' Mar 27 00:09:42.477: INFO: stderr: "" Mar 27 00:09:42.477: INFO: stdout: "e2e-test-crd-publish-openapi-3442-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 27 00:09:42.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1151 delete e2e-test-crd-publish-openapi-3442-crds test-cr' Mar 27 00:09:42.605: INFO: stderr: "" Mar 27 00:09:42.605: INFO: stdout: "e2e-test-crd-publish-openapi-3442-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 27 00:09:42.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3442-crds' Mar 27 00:09:42.866: INFO: stderr: "" Mar 27 00:09:42.866: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3442-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:09:44.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1151" for this suite. • [SLOW TEST:7.621 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":97,"skipped":1619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:09:44.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 27 00:09:44.939: INFO: PodSpec: initContainers in spec.initContainers Mar 27 00:10:30.590: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ba057905-d088-4c4e-845a-789c0cba31bb", GenerateName:"", Namespace:"init-container-4613", SelfLink:"/api/v1/namespaces/init-container-4613/pods/pod-init-ba057905-d088-4c4e-845a-789c0cba31bb", UID:"db510df9-d3be-454b-becd-7aa778db78c9", ResourceVersion:"3074192", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720864584, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"939199239"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jwxhs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004be0f00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jwxhs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jwxhs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jwxhs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fffde8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00242d9d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fffe70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fffe90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001fffe98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001fffe9c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864585, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864585, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864585, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864584, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.96", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.96"}}, StartTime:(*v1.Time)(0xc0036cb640), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00242db20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00242db90)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d78048cadcccb9ee1cb93e982ce93eaca7c23a536c1769bbbb74bce6c804152c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036cb6a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036cb660), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc001ffff2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:10:30.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4613" for this suite. 
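The PodSpec dumped above reduces to roughly the following manifest (generated name, labels, and service-account volume omitted). Because init1 keeps failing under restartPolicy: Always, init2 and run1 are never started, which is exactly what the spec asserts:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo                 # the run above used a generated name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]           # always fails; RestartCount climbs (3 in the dump above)
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]            # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2       # app container; must stay Waiting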
• [SLOW TEST:45.831 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":98,"skipped":1662,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:10:30.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 27 00:10:30.694: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:10:47.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2971" for this suite. 
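A minimal sketch of the kind of multi-version CRD this spec manipulates; the group, kind, and version names here are illustrative assumptions. Renaming a served version entry (say v2 to v3) changes the published OpenAPI spec for that version while the other version's entry is left untouched:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com              # assumed group/kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v2                          # renaming this entry re-publishes the spec under the new name
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v4                          # the "other version" the spec checks is unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true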
• [SLOW TEST:16.876 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":99,"skipped":1675,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:10:47.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:10:48.210: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:10:50.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864648, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864648, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864648, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864648, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:10:53.250: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:10:53.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3077-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:10:54.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8167" for this suite. STEP: Destroying namespace "webhook-8167-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.998 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":100,"skipped":1680,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:10:54.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 27 00:10:54.569: INFO: Waiting up to 5m0s for pod "pod-62f25c2c-1b8d-468a-8343-e79f0961db33" in namespace "emptydir-2972" to be "Succeeded or Failed" Mar 27 00:10:54.586: INFO: Pod "pod-62f25c2c-1b8d-468a-8343-e79f0961db33": Phase="Pending", Reason="", readiness=false. Elapsed: 16.954063ms Mar 27 00:10:56.621: INFO: Pod "pod-62f25c2c-1b8d-468a-8343-e79f0961db33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052136629s Mar 27 00:10:58.626: INFO: Pod "pod-62f25c2c-1b8d-468a-8343-e79f0961db33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056732653s STEP: Saw pod success Mar 27 00:10:58.626: INFO: Pod "pod-62f25c2c-1b8d-468a-8343-e79f0961db33" satisfied condition "Succeeded or Failed" Mar 27 00:10:58.629: INFO: Trying to get logs from node latest-worker2 pod pod-62f25c2c-1b8d-468a-8343-e79f0961db33 container test-container: STEP: delete the pod Mar 27 00:10:58.649: INFO: Waiting for pod pod-62f25c2c-1b8d-468a-8343-e79f0961db33 to disappear Mar 27 00:10:58.665: INFO: Pod pod-62f25c2c-1b8d-468a-8343-e79f0961db33 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:10:58.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2972" for this suite. 
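A minimal sketch of an equivalent pod for the (non-root,0666,default) emptyDir variant; the image and the shell probe stand in for the test's test-container and are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666-demo        # assumed name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root, per the (non-root,...) variant
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium (node disk, not tmpfs)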
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1691,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:10:58.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 27 00:11:06.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 27 00:11:06.848: INFO: Pod pod-with-poststart-http-hook still exists Mar 27 00:11:08.848: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 27 00:11:08.853: INFO: Pod pod-with-poststart-http-hook still exists Mar 27 00:11:10.848: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 27 00:11:10.853: INFO: Pod pod-with-poststart-http-hook still exists Mar 27 00:11:12.848: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 27 00:11:12.853: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:11:12.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-124" for this suite. 
• [SLOW TEST:14.190 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:11:12.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:11:13.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:11:15.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864673, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864673, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864673, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864673, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:11:18.468: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:11:18.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-8617" for this suite. STEP: Destroying namespace "webhook-8617-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.989 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":103,"skipped":1733,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:11:18.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 27 00:11:18.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8214' Mar 27 00:11:19.446: INFO: stderr: "" Mar 27 00:11:19.446: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 27 00:11:19.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8214' Mar 27 00:11:19.761: INFO: stderr: "" Mar 27 00:11:19.761: INFO: stdout: "update-demo-nautilus-2mjh2 update-demo-nautilus-kxfr2 " Mar 27 00:11:19.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mjh2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8214' Mar 27 00:11:19.883: INFO: stderr: "" Mar 27 00:11:19.883: INFO: stdout: "" Mar 27 00:11:19.883: INFO: update-demo-nautilus-2mjh2 is created but not running Mar 27 00:11:24.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8214' Mar 27 00:11:24.982: INFO: stderr: "" Mar 27 00:11:24.982: INFO: stdout: "update-demo-nautilus-2mjh2 update-demo-nautilus-kxfr2 " Mar 27 00:11:24.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mjh2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8214' Mar 27 00:11:25.073: INFO: stderr: "" Mar 27 00:11:25.073: INFO: stdout: "true" Mar 27 00:11:25.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mjh2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8214' Mar 27 00:11:25.167: INFO: stderr: "" Mar 27 00:11:25.167: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 27 00:11:25.167: INFO: validating pod update-demo-nautilus-2mjh2 Mar 27 00:11:25.172: INFO: got data: { "image": "nautilus.jpg" } Mar 27 00:11:25.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 27 00:11:25.172: INFO: update-demo-nautilus-2mjh2 is verified up and running Mar 27 00:11:25.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxfr2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8214' Mar 27 00:11:25.265: INFO: stderr: "" Mar 27 00:11:25.265: INFO: stdout: "true" Mar 27 00:11:25.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxfr2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8214' Mar 27 00:11:25.364: INFO: stderr: "" Mar 27 00:11:25.364: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 27 00:11:25.364: INFO: validating pod update-demo-nautilus-kxfr2 Mar 27 00:11:25.369: INFO: got data: { "image": "nautilus.jpg" } Mar 27 00:11:25.369: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 27 00:11:25.369: INFO: update-demo-nautilus-kxfr2 is verified up and running STEP: using delete to clean up resources Mar 27 00:11:25.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8214' Mar 27 00:11:25.500: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:11:25.500: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 27 00:11:25.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8214' Mar 27 00:11:25.895: INFO: stderr: "No resources found in kubectl-8214 namespace.\n" Mar 27 00:11:25.895: INFO: stdout: "" Mar 27 00:11:25.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8214 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 27 00:11:25.986: INFO: stderr: "" Mar 27 00:11:25.986: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:11:25.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8214" for this suite. • [SLOW TEST:7.149 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":104,"skipped":1786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:11:26.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-0d136f24-9903-4ba4-8c27-faf75b56075b STEP: Creating a pod to test consume secrets Mar 27 00:11:26.060: INFO: Waiting up to 5m0s for pod "pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225" in namespace "secrets-1470" to be "Succeeded or Failed" Mar 27 00:11:26.263: INFO: Pod "pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225": Phase="Pending", Reason="", readiness=false. Elapsed: 202.299008ms Mar 27 00:11:28.268: INFO: Pod "pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207840706s Mar 27 00:11:30.272: INFO: Pod "pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.212171937s STEP: Saw pod success Mar 27 00:11:30.273: INFO: Pod "pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225" satisfied condition "Succeeded or Failed" Mar 27 00:11:30.276: INFO: Trying to get logs from node latest-worker pod pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225 container secret-env-test: STEP: delete the pod Mar 27 00:11:30.311: INFO: Waiting for pod pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225 to disappear Mar 27 00:11:30.340: INFO: Pod pod-secrets-928cc47d-e312-4c01-99e4-627a7f862225 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:11:30.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1470" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1810,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:11:30.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-7b9b STEP: Creating a pod to test atomic-volume-subpath Mar 27 00:11:30.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7b9b" in namespace "subpath-9528" to be "Succeeded or Failed" Mar 27 00:11:30.454: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.772678ms Mar 27 00:11:32.460: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010818019s Mar 27 00:11:34.464: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.014838875s Mar 27 00:11:36.467: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.018343562s Mar 27 00:11:38.472: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 8.022419484s Mar 27 00:11:40.476: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 10.026585626s Mar 27 00:11:42.480: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 12.030666456s Mar 27 00:11:44.484: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 14.034982733s Mar 27 00:11:46.488: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 16.039138921s Mar 27 00:11:48.492: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.043091894s Mar 27 00:11:50.499: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 20.050109903s Mar 27 00:11:52.503: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Running", Reason="", readiness=true. Elapsed: 22.053942104s Mar 27 00:11:54.521: INFO: Pod "pod-subpath-test-downwardapi-7b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.07138012s STEP: Saw pod success Mar 27 00:11:54.521: INFO: Pod "pod-subpath-test-downwardapi-7b9b" satisfied condition "Succeeded or Failed" Mar 27 00:11:54.523: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-7b9b container test-container-subpath-downwardapi-7b9b: STEP: delete the pod Mar 27 00:11:54.556: INFO: Waiting for pod pod-subpath-test-downwardapi-7b9b to disappear Mar 27 00:11:54.565: INFO: Pod pod-subpath-test-downwardapi-7b9b no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7b9b Mar 27 00:11:54.565: INFO: Deleting pod "pod-subpath-test-downwardapi-7b9b" in namespace "subpath-9528" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:11:54.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9528" for this suite. • [SLOW TEST:24.228 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":106,"skipped":1812,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:11:54.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-4685/secret-test-0f5e818b-ced4-4ee3-997a-c36c08725f23 STEP: Creating a pod to test consume secrets Mar 27 00:11:54.670: INFO: Waiting up to 5m0s for pod "pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915" in namespace "secrets-4685" to be "Succeeded or Failed" Mar 27 00:11:54.679: INFO: Pod "pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915": Phase="Pending", Reason="", readiness=false. Elapsed: 8.942184ms Mar 27 00:11:56.682: INFO: Pod "pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011855082s Mar 27 00:11:58.685: INFO: Pod "pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01546255s STEP: Saw pod success Mar 27 00:11:58.685: INFO: Pod "pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915" satisfied condition "Succeeded or Failed" Mar 27 00:11:58.688: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915 container env-test: STEP: delete the pod Mar 27 00:11:58.707: INFO: Waiting for pod pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915 to disappear Mar 27 00:11:58.717: INFO: Pod pod-configmaps-8502b45f-a7e7-41ec-b7c6-bc2115c42915 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:11:58.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4685" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1815,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:11:58.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-4d796fcc-915d-48eb-9d04-6c6288acd586 STEP: Creating a pod to test consume secrets Mar 27 00:11:58.850: INFO: Waiting up to 5m0s for pod "pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e" in namespace "secrets-2060" to be "Succeeded or Failed" Mar 27 00:11:58.868: INFO: Pod "pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.158241ms Mar 27 00:12:00.898: INFO: Pod "pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048501151s Mar 27 00:12:02.902: INFO: Pod "pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052486932s STEP: Saw pod success Mar 27 00:12:02.902: INFO: Pod "pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e" satisfied condition "Succeeded or Failed" Mar 27 00:12:02.905: INFO: Trying to get logs from node latest-worker pod pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e container secret-volume-test: STEP: delete the pod Mar 27 00:12:02.923: INFO: Waiting for pod pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e to disappear Mar 27 00:12:02.927: INFO: Pod pod-secrets-917a9ddb-bde3-4a80-9687-7cd92b00e28e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:12:02.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2060" for this suite. 
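A minimal sketch of the secret-volume pairing this spec exercises; the secret name, key, and mount path are illustrative assumptions:

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                   # assumed name; the run uses a generated one
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-volume-demo       # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]   # the test asserts on the file contents
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret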
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1835,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:02.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 27 00:12:03.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544" in namespace "downward-api-6079" to be "Succeeded or Failed"
Mar 27 00:12:03.011: INFO: Pod "downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544": Phase="Pending", Reason="", readiness=false. Elapsed: 4.692906ms
Mar 27 00:12:05.017: INFO: Pod "downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0105218s
Mar 27 00:12:07.021: INFO: Pod "downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014543724s
STEP: Saw pod success
Mar 27 00:12:07.021: INFO: Pod "downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544" satisfied condition "Succeeded or Failed"
Mar 27 00:12:07.024: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544 container client-container:
STEP: delete the pod
Mar 27 00:12:07.040: INFO: Waiting for pod downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544 to disappear
Mar 27 00:12:07.044: INFO: Pod downwardapi-volume-5c4f013c-3071-4716-8257-e289bd060544 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:12:07.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6079" for this suite.
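The "downward API volume plugin" pod above exposes a container's own resource request as a file. A minimal sketch of that shape, assuming a busybox image and an invented mount path (the real fixture differs):

```go
// Hedged sketch: a downwardAPI volume file populated from resourceFieldRef,
// so the container can read its own CPU request at /etc/podinfo/cpu_request.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIVolumePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}
```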
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:07.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 27 00:12:07.763: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 27 00:12:09.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864727, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864727, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864727, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864727, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 27 00:12:12.788: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:12:12.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5053" for this suite.
STEP: Destroying namespace "webhook-5053-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.016 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":110,"skipped":1901,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:13.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-bd6d152b-3124-4513-b6ca-9e7678c4deca
STEP: Creating a pod to test consume configMaps
Mar 27 00:12:13.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9" in namespace "configmap-8124" to be "Succeeded or Failed"
Mar 27 00:12:13.137: INFO: Pod "pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.324832ms
Mar 27 00:12:15.140: INFO: Pod "pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013336184s
Mar 27 00:12:17.145: INFO: Pod "pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017636378s
STEP: Saw pod success
Mar 27 00:12:17.145: INFO: Pod "pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9" satisfied condition "Succeeded or Failed"
Mar 27 00:12:17.148: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9 container configmap-volume-test:
STEP: delete the pod
Mar 27 00:12:17.187: INFO: Waiting for pod pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9 to disappear
Mar 27 00:12:17.239: INFO: Pod pod-configmaps-6343c875-3aa2-4660-983b-cae5ebe1a6e9 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:12:17.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8124" for this suite.
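The ConfigMap test above mounts the same ConfigMap through two separate volumes in one pod. A rough sketch of that pod shape, assuming a busybox image and an invented key "data-1" (illustrative names, not the suite's fixtures):

```go
// Hedged sketch: one ConfigMap consumed via two volumes in the same pod.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func multiVolumeConfigMapPod(ns, cmName string) *corev1.Pod {
	// cmVolume builds a volume backed by the same ConfigMap under a new name.
	cmVolume := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "diff /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
				},
			}},
			Volumes: []corev1.Volume{cmVolume("configmap-volume-1"), cmVolume("configmap-volume-2")},
		},
	}
}
```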
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1908,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:17.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 27 00:12:25.403: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 27 00:12:25.424: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 27 00:12:27.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 27 00:12:27.429: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 27 00:12:29.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 27 00:12:29.429: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 27 00:12:31.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 27 00:12:31.429: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 27 00:12:33.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 27 00:12:33.428: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:12:33.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1821" for this suite.
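The lifecycle-hook test above creates a pod whose PostStart exec handler pings the HTTPGet-handler pod created in BeforeEach. A minimal sketch of that pod, assuming a busybox image and an invented handler address (this vintage of k8s.io/api calls the handler type corev1.Handler; newer releases rename it LifecycleHandler):

```go
// Hedged sketch: a pod with a PostStart exec hook that runs right after
// the container starts, before the pod is considered fully running.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func postStartExecPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical: notify the handler pod; the real test
							// targets the pod it created in BeforeEach.
							Command: []string{"sh", "-c", "wget -qO- http://10.0.0.1:8080/echo?msg=poststart"},
						},
					},
				},
			}},
		},
	}
}
```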
• [SLOW TEST:16.187 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1920,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:33.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-8769
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8769 to expose endpoints map[]
Mar 27 00:12:33.539: INFO: Get endpoints failed (5.025877ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 27 00:12:34.543: INFO: successfully validated that service multi-endpoint-test in namespace services-8769 exposes endpoints map[] (1.008942142s elapsed)
STEP: Creating pod pod1 in namespace services-8769
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8769 to expose endpoints map[pod1:[100]]
Mar 27 00:12:37.583: INFO: successfully validated that service multi-endpoint-test in namespace services-8769 exposes endpoints map[pod1:[100]] (3.031635017s elapsed)
STEP: Creating pod pod2 in namespace services-8769
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8769 to expose endpoints map[pod1:[100] pod2:[101]]
Mar 27 00:12:40.836: INFO: successfully validated that service multi-endpoint-test in namespace services-8769 exposes endpoints map[pod1:[100] pod2:[101]] (3.248647071s elapsed)
STEP: Deleting pod pod1 in namespace services-8769
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8769 to expose endpoints map[pod2:[101]]
Mar 27 00:12:40.929: INFO: successfully validated that service multi-endpoint-test in namespace services-8769 exposes endpoints map[pod2:[101]] (88.512352ms elapsed)
STEP: Deleting pod pod2 in namespace services-8769
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8769 to expose endpoints map[]
Mar 27 00:12:41.192: INFO: successfully validated that service multi-endpoint-test in namespace services-8769 exposes endpoints map[] (258.527256ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:12:41.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8769" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:7.919 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":113,"skipped":1945,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:41.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:12:45.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9909" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1949,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:45.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 27 00:12:45.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30" in namespace "downward-api-6457" to be "Succeeded or Failed"
Mar 27 00:12:45.549: INFO: Pod "downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103184ms
Mar 27 00:12:47.561: INFO: Pod "downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02024976s
Mar 27 00:12:49.565: INFO: Pod "downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024420602s
STEP: Saw pod success
Mar 27 00:12:49.565: INFO: Pod "downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30" satisfied condition "Succeeded or Failed"
Mar 27 00:12:49.568: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30 container client-container:
STEP: delete the pod
Mar 27 00:12:49.673: INFO: Waiting for pod downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30 to disappear
Mar 27 00:12:49.693: INFO: Pod downwardapi-volume-12fa0c49-69e2-4264-beee-3a4fe42c1e30 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:12:49.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6457" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":2026,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:12:49.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:13:00.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4172" for this suite.
• [SLOW TEST:11.132 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":116,"skipped":2056,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:13:00.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 27 00:13:01.515: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 27 00:13:03.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864781, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864781, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864781, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864781, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 27 00:13:06.612: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:13:06.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6920" for this suite.
STEP: Destroying namespace "webhook-6920-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.990 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":117,"skipped":2092,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:13:06.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-b83c3fc5-fcd2-4e33-b5b5-4a50d047eef5
STEP: Creating a pod to test consume configMaps
Mar 27 00:13:06.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66" in namespace "configmap-6470" to be "Succeeded or Failed"
Mar 27 00:13:06.938: INFO: Pod "pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.44962ms
Mar 27 00:13:08.942: INFO: Pod "pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012157284s
Mar 27 00:13:10.946: INFO: Pod "pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016792428s
STEP: Saw pod success
Mar 27 00:13:10.946: INFO: Pod "pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66" satisfied condition "Succeeded or Failed"
Mar 27 00:13:10.950: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66 container configmap-volume-test:
STEP: delete the pod
Mar 27 00:13:10.984: INFO: Waiting for pod pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66 to disappear
Mar 27 00:13:10.998: INFO: Pod pod-configmaps-9c4a7bd0-118a-4306-b858-7fdcdb9b4c66 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:13:10.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6470" for this suite.
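The "Patching a validating webhook configuration's rules" step in the AdmissionWebhook test above boils down to a JSON patch against the configuration object. A hedged sketch with client-go; the configuration name and patch path are illustrative, not the suite's actual ones:

```go
// Hedged sketch: toggling a ValidatingWebhookConfiguration's first rule's
// operations list via a JSON patch, as the webhook test does.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Replace the operations of the first rule of the first webhook with CREATE.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	_, err = client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Patch(context.TODO(), "e2e-test-validating-webhook", types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
```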
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2100,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:13:11.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4178
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-4178
Mar 27 00:13:11.091: INFO: Found 0 stateful pods, waiting for 1
Mar 27 00:13:21.096: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 27 00:13:21.115: INFO: Deleting all statefulset in ns statefulset-4178
Mar 27 00:13:21.122: INFO: Scaling statefulset ss to 0
Mar 27 00:13:51.186: INFO: Waiting for statefulset status.replicas updated to 0
Mar 27 00:13:51.189: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:13:51.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4178" for this suite.
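The "getting scale subresource" / "updating a scale subresource" steps above correspond to the GetScale/UpdateScale calls on the StatefulSet client. A minimal sketch under the assumption that the test's names (namespace statefulset-4178, statefulset ss) are reused:

```go
// Hedged sketch: read the scale subresource, change replicas, write it back.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sts := client.AppsV1().StatefulSets("statefulset-4178")
	scale, err := sts.GetScale(context.TODO(), "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2 // scale up through the subresource only
	updated, err := sts.UpdateScale(context.TODO(), "ss", scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("scale now:", updated.Spec.Replicas)
}
```

Going through the scale subresource is what lets controllers such as the HPA resize a workload without holding update rights on the whole StatefulSet object.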
• [SLOW TEST:40.251 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":119,"skipped":2109,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:13:51.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 27 00:13:51.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521" in namespace "projected-8279" to be "Succeeded or Failed"
Mar 27 00:13:51.326: INFO: Pod "downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521": Phase="Pending", Reason="", readiness=false. Elapsed: 11.13195ms
Mar 27 00:13:53.330: INFO: Pod "downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014821864s
Mar 27 00:13:55.334: INFO: Pod "downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018647233s
STEP: Saw pod success
Mar 27 00:13:55.334: INFO: Pod "downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521" satisfied condition "Succeeded or Failed"
Mar 27 00:13:55.336: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521 container client-container:
STEP: delete the pod
Mar 27 00:13:55.358: INFO: Waiting for pod downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521 to disappear
Mar 27 00:13:55.376: INFO: Pod downwardapi-volume-702d8383-3b4e-4b83-aa70-a63359ff3521 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:13:55.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8279" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2128,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:13:55.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-858bf631-4d76-40f8-bb63-2cbe0741e0d2 STEP: Creating a pod to test consume secrets Mar 27 00:13:55.484: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69" in namespace "projected-3018" to be "Succeeded or Failed" Mar 27 00:13:55.488: INFO: Pod "pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69": Phase="Pending", Reason="", readiness=false. Elapsed: 3.835175ms Mar 27 00:13:57.492: INFO: Pod "pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008010385s Mar 27 00:13:59.496: INFO: Pod "pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012446269s STEP: Saw pod success Mar 27 00:13:59.496: INFO: Pod "pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69" satisfied condition "Succeeded or Failed" Mar 27 00:13:59.500: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69 container projected-secret-volume-test: STEP: delete the pod Mar 27 00:13:59.519: INFO: Waiting for pod pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69 to disappear Mar 27 00:13:59.524: INFO: Pod pod-projected-secrets-fa500a0d-ba21-40ad-b96a-c108e5642f69 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:13:59.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3018" for this suite. 
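The projected-secret test above combines three knobs: a projected volume over a Secret, a defaultMode on the projection, and a pod-level fsGroup/runAsUser so a non-root user can read the files. A rough sketch of that pod, with invented UIDs, mode, and mount path:

```go
// Hedged sketch: projected Secret volume with defaultMode, consumed as
// non-root with an fsGroup so the files are group-readable.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedSecretPod(ns, secretName string) *corev1.Pod {
	uid, fsGroup := int64(1000), int64(1001)
	defaultMode := int32(0440) // files get mode 0440 and fsGroup ownership
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /var/run/secret && cat /var/run/secret/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/var/run/secret"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
		},
	}
}
```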
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2163,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:13:59.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 27 00:13:59.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 27 00:14:02.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6844 create -f -'
Mar 27 00:14:05.636: INFO: stderr: ""
Mar 27 00:14:05.637: INFO: stdout: "e2e-test-crd-publish-openapi-6053-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 27 00:14:05.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6844 delete e2e-test-crd-publish-openapi-6053-crds test-cr'
Mar 27 00:14:05.726: INFO: stderr: ""
Mar 27 00:14:05.726: INFO: stdout: "e2e-test-crd-publish-openapi-6053-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 27 00:14:05.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6844 apply -f -'
Mar 27 00:14:05.962: INFO: stderr: ""
Mar 27 00:14:05.963: INFO: stdout: "e2e-test-crd-publish-openapi-6053-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 27 00:14:05.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6844 delete e2e-test-crd-publish-openapi-6053-crds test-cr'
Mar 27 00:14:06.068: INFO: stderr: ""
Mar 27 00:14:06.068: INFO: stdout: "e2e-test-crd-publish-openapi-6053-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 27 00:14:06.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6053-crds'
Mar 27 00:14:06.310: INFO: stderr: ""
Mar 27 00:14:06.310: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6053-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:14:09.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6844" for this suite.
• [SLOW TEST:9.706 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":122,"skipped":2166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:14:09.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 27 00:14:09.706: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 27 00:14:11.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864849, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864849, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864849, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720864849, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 27 00:14:14.752: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:14:15.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9520" for this suite.
STEP: Destroying namespace "webhook-9520-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.006 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":123,"skipped":2191,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:14:15.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7134, will wait for the garbage collector to delete the pods
Mar 27 00:14:19.414: INFO: Deleting Job.batch foo took: 6.711606ms
Mar 27 00:14:19.714: INFO: Terminating Job.batch foo pods took: 300.255304ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:15:03.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7134" for this suite.
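The "will wait for the garbage collector to delete the pods" phrasing above is the observable effect of deleting the Job with a propagation policy rather than orphaning its pods. A minimal sketch, reusing the test's names (Job foo in namespace job-7134) for illustration:

```go
// Hedged sketch: delete a Job and let the garbage collector remove its pods.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation returns immediately; the GC then deletes the
	// Job's pods, which is why the log keeps polling until they disappear.
	policy := metav1.DeletePropagationBackground
	err = client.BatchV1().Jobs("job-7134").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}
```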
• [SLOW TEST:47.800 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":124,"skipped":2201,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:15:03.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 27 00:15:03.097: INFO: Waiting up to 5m0s for pod "downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257" in namespace "downward-api-7847" to be "Succeeded or Failed"
Mar 27 00:15:03.100: INFO: Pod "downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.879985ms
Mar 27 00:15:05.104: INFO: Pod "downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007051295s
Mar 27 00:15:07.109: INFO: Pod "downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011613986s
STEP: Saw pod success
Mar 27 00:15:07.109: INFO: Pod "downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257" satisfied condition "Succeeded or Failed"
Mar 27 00:15:07.112: INFO: Trying to get logs from node latest-worker pod downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257 container dapi-container:
STEP: delete the pod
Mar 27 00:15:07.171: INFO: Waiting for pod downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257 to disappear
Mar 27 00:15:07.174: INFO: Pod downward-api-19de34dc-52ea-4ee2-a24e-c7a08c293257 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:15:07.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7847" for this suite.
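Unlike the downwardAPI-volume tests earlier, the Downward API test above surfaces limits and requests as environment variables via resourceFieldRef. A rough sketch of that pod, with invented env var names and resource values:

```go
// Hedged sketch: limits.cpu/memory and requests.cpu/memory exposed as env vars.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIEnvPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
				},
			}},
		},
	}
}
```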
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:15:07.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 27 00:15:07.225: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 27 00:15:07.249: INFO: Waiting for terminating namespaces to be deleted...
Mar 27 00:15:07.252: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Mar 27 00:15:07.258: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 27 00:15:07.258: INFO: Container kindnet-cni ready: true, restart count 0
Mar 27 00:15:07.258: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 27 00:15:07.258: INFO: Container kube-proxy ready: true, restart count 0
Mar 27 00:15:07.258: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Mar 27 00:15:07.275: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 27 00:15:07.275: INFO: Container kube-proxy ready: true, restart count 0
Mar 27 00:15:07.275: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 27 00:15:07.275: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160000f1c98edc98], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:15:08.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8860" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":126,"skipped":2233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:15:08.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-0df706af-5deb-47cd-b32d-442b8dd873a8
STEP: Creating configMap with name cm-test-opt-upd-037831c4-4634-4f5d-b33b-3a07c575c8be
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0df706af-5deb-47cd-b32d-442b8dd873a8
STEP: Updating configmap cm-test-opt-upd-037831c4-4634-4f5d-b33b-3a07c575c8be
STEP: Creating configMap with name cm-test-opt-create-4fa0410f-9438-430d-8adf-e95fe5896ddb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:16:24.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2816" for this suite.
• [SLOW TEST:76.526 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:16:24.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:16:41.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1323" for this suite.
• [SLOW TEST:17.100 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":128,"skipped":2314,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 27 00:16:41.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Mar 27 00:16:42.010: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix949104191/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 27 00:16:42.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-544" for this suite.
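"Retrieving proxy /api/ output" through a unix socket amounts to plain HTTP dialed over the socket instead of TCP. A minimal sketch in Go's standard library, reusing the socket path from the log; the localhost host in the URL is a placeholder that the custom dialer ignores:

```go
// Hedged sketch: GET /api/ from kubectl proxy listening on a unix socket.
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/kubectl-proxy-unix949104191/test" // socket path from the log
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the URL's host/port and dial the unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```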
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":129,"skipped":2334,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:16:42.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Mar 27 00:16:42.179: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:16:42.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9151" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":130,"skipped":2337,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:16:42.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 27 00:16:42.323: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:16:48.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1793" for this suite. 
• [SLOW TEST:6.012 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":131,"skipped":2339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:16:48.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-7e600c1d-ebaa-4d90-af79-1dc69787049a [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:16:48.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8184" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":132,"skipped":2365,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:16:48.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Mar 27 00:16:48.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2949' Mar 27 00:16:48.821: INFO: stderr: "" Mar 27 00:16:48.821: INFO: stdout: "pod/pause created\n" Mar 27 00:16:48.821: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 27 00:16:48.821: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2949" to be "running and ready" Mar 27 00:16:48.846: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 25.30264ms Mar 27 00:16:50.850: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028737521s Mar 27 00:16:52.854: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.032879599s Mar 27 00:16:52.854: INFO: Pod "pause" satisfied condition "running and ready" Mar 27 00:16:52.854: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Mar 27 00:16:52.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2949' Mar 27 00:16:52.955: INFO: stderr: "" Mar 27 00:16:52.955: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 27 00:16:52.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2949' Mar 27 00:16:53.050: INFO: stderr: "" Mar 27 00:16:53.050: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 27 00:16:53.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2949' Mar 27 00:16:53.149: INFO: stderr: "" Mar 27 00:16:53.149: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 27 00:16:53.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2949' Mar 27 00:16:53.254: INFO: stderr: "" Mar 27 00:16:53.254: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Mar 27 00:16:53.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2949' Mar 27 00:16:53.411: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:16:53.411: INFO: stdout: "pod \"pause\" force deleted\n" Mar 27 00:16:53.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2949' Mar 27 00:16:53.518: INFO: stderr: "No resources found in kubectl-2949 namespace.\n" Mar 27 00:16:53.518: INFO: stdout: "" Mar 27 00:16:53.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2949 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 27 00:16:53.603: INFO: stderr: "" Mar 27 00:16:53.603: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:16:53.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2949" for this suite. • [SLOW TEST:5.362 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":133,"skipped":2370,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:16:53.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 27 00:16:53.974: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:16:53.977: INFO: Number of nodes with available pods: 0 Mar 27 00:16:53.977: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:16:55.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:16:55.055: INFO: Number of nodes with available pods: 0 Mar 27 00:16:55.055: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:16:56.082: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:16:56.085: INFO: Number of nodes with available pods: 0 Mar 27 00:16:56.085: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:16:56.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:16:56.986: INFO: Number of nodes with available pods: 0 Mar 27 00:16:56.986: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:16:57.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:16:57.989: INFO: Number of nodes with available pods: 1 Mar 27 00:16:57.989: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:16:58.985: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:16:58.990: INFO: Number of nodes with available pods: 2 Mar 27 00:16:58.990: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 27 00:16:59.009: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:16:59.026: INFO: Number of nodes with available pods: 2 Mar 27 00:16:59.026: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5755, will wait for the garbage collector to delete the pods Mar 27 00:17:00.136: INFO: Deleting DaemonSet.extensions daemon-set took: 6.748312ms Mar 27 00:17:00.436: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.248177ms Mar 27 00:17:13.045: INFO: Number of nodes with available pods: 0 Mar 27 00:17:13.045: INFO: Number of running nodes: 0, number of available pods: 0 Mar 27 00:17:13.048: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5755/daemonsets","resourceVersion":"3076676"},"items":null} Mar 27 00:17:13.051: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5755/pods","resourceVersion":"3076676"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:17:13.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5755" for this suite. • [SLOW TEST:19.332 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":134,"skipped":2385,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:17:13.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 27 00:17:17.236: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:17:17.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4355" for this suite. 
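The termination-message check above can be reproduced with a pod that writes to the default terminationMessagePath and sets FallbackToLogsOnError; a minimal sketch (names are illustrative):
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-msg-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # Because the container wrote a message file and exited 0, the file's
    # contents (not the logs) surface as the termination message:
    kubectl get pod termination-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'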
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2395,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:17:17.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 27 00:17:17.330: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 27 00:17:17.358: INFO: Waiting for terminating namespaces to be deleted... Mar 27 00:17:17.360: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 27 00:17:17.375: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:17:17.375: INFO: Container kindnet-cni ready: true, restart count 0 Mar 27 00:17:17.375: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:17:17.375: INFO: Container kube-proxy ready: true, restart count 0 Mar 27 00:17:17.375: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 27 00:17:17.380: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:17:17.380: INFO: Container kindnet-cni ready: true, restart count 0 Mar 27 00:17:17.380: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:17:17.380: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 27 00:17:17.444: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Mar 27 00:17:17.444: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Mar 27 00:17:17.444: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Mar 27 00:17:17.444: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Mar 27 00:17:17.444: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Mar 27 00:17:17.452: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-effe8602-b5ea-48bf-bd91-a5bf49603939.160001101a2e885f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8670/filler-pod-effe8602-b5ea-48bf-bd91-a5bf49603939 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-effe8602-b5ea-48bf-bd91-a5bf49603939.160001106788f013], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-effe8602-b5ea-48bf-bd91-a5bf49603939.16000110a699f4bb], Reason = [Created], Message = [Created container filler-pod-effe8602-b5ea-48bf-bd91-a5bf49603939] STEP: Considering event: Type = [Normal], Name = [filler-pod-effe8602-b5ea-48bf-bd91-a5bf49603939.16000110be661f7d], Reason = [Started], Message = [Started container filler-pod-effe8602-b5ea-48bf-bd91-a5bf49603939] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe511a8-6adb-4154-a7a6-e4c3c1b58907.160001101998d849], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8670/filler-pod-ffe511a8-6adb-4154-a7a6-e4c3c1b58907 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe511a8-6adb-4154-a7a6-e4c3c1b58907.1600011097b431a7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe511a8-6adb-4154-a7a6-e4c3c1b58907.16000110bb808e60], Reason = [Created], Message = [Created container filler-pod-ffe511a8-6adb-4154-a7a6-e4c3c1b58907] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe511a8-6adb-4154-a7a6-e4c3c1b58907.16000110cc0399b9], Reason = [Started], Message = [Started container filler-pod-ffe511a8-6adb-4154-a7a6-e4c3c1b58907] STEP: Considering event: Type = [Warning], Name = [additional-pod.16000111095009b8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160001110b8b60f6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:17:22.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8670" for this suite. 
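The scheduling failure engineered above follows directly from node allocatable CPU accounting; the same "Insufficient cpu" event can be provoked with a single over-sized request (quantity and names are illustrative; the pause image is the one used by this run):
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-hog-demo
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.2
        resources:
          requests:
            cpu: "1000"   # far beyond any node's allocatable CPU
    EOF
    # Events should show FailedScheduling with '... Insufficient cpu',
    # mirroring the additional-pod events captured above.
    kubectl describe pod cpu-hog-demo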
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.330 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":136,"skipped":2416,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:17:22.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 27 00:17:22.766: INFO: Waiting up to 5m0s for pod "pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6" in namespace "emptydir-8380" to be "Succeeded or Failed" Mar 27 00:17:22.799: INFO: Pod "pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.831595ms Mar 27 00:17:24.803: INFO: Pod "pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037693096s Mar 27 00:17:26.807: INFO: Pod "pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041262851s STEP: Saw pod success Mar 27 00:17:26.807: INFO: Pod "pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6" satisfied condition "Succeeded or Failed" Mar 27 00:17:26.810: INFO: Trying to get logs from node latest-worker2 pod pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6 container test-container: STEP: delete the pod Mar 27 00:17:26.838: INFO: Waiting for pod pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6 to disappear Mar 27 00:17:26.854: INFO: Pod pod-cb86d0f4-0d09-4996-81aa-bf459e4581d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:17:26.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8380" for this suite. 
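The emptyDir variant tested above (tmpfs-backed, written with mode 0644 by a non-root user) corresponds to medium: Memory plus a non-root securityContext; a minimal sketch, assuming the default world-writable emptyDir mount (UID and names are illustrative):
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001            # non-root UID, illustrative
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "mount | grep /cache; touch /cache/f && chmod 0644 /cache/f && ls -l /cache"]
        volumeMounts:
        - name: cache
          mountPath: /cache
      volumes:
      - name: cache
        emptyDir:
          medium: Memory           # tmpfs-backed emptyDir
    EOF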
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2434,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:17:26.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-d8l8 STEP: Creating a pod to test atomic-volume-subpath Mar 27 00:17:26.944: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-d8l8" in namespace "subpath-1744" to be "Succeeded or Failed" Mar 27 00:17:26.948: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221885ms Mar 27 00:17:28.952: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008505333s Mar 27 00:17:30.957: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 4.012881169s Mar 27 00:17:32.961: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 6.017419874s Mar 27 00:17:34.965: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 8.021401001s Mar 27 00:17:36.970: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 10.025865672s Mar 27 00:17:38.974: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 12.029969801s Mar 27 00:17:40.978: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 14.034277965s Mar 27 00:17:42.982: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 16.037695284s Mar 27 00:17:44.986: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 18.042140816s Mar 27 00:17:46.990: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 20.04631064s Mar 27 00:17:48.995: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Running", Reason="", readiness=true. Elapsed: 22.050702532s Mar 27 00:17:50.999: INFO: Pod "pod-subpath-test-secret-d8l8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055294234s STEP: Saw pod success Mar 27 00:17:50.999: INFO: Pod "pod-subpath-test-secret-d8l8" satisfied condition "Succeeded or Failed" Mar 27 00:17:51.003: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-d8l8 container test-container-subpath-secret-d8l8: STEP: delete the pod Mar 27 00:17:51.066: INFO: Waiting for pod pod-subpath-test-secret-d8l8 to disappear Mar 27 00:17:51.068: INFO: Pod pod-subpath-test-secret-d8l8 no longer exists STEP: Deleting pod pod-subpath-test-secret-d8l8 Mar 27 00:17:51.068: INFO: Deleting pod "pod-subpath-test-secret-d8l8" in namespace "subpath-1744" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:17:51.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1744" for this suite. • [SLOW TEST:24.218 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":138,"skipped":2437,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:17:51.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:18:07.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9305" for this suite. 
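The scope behavior verified above hinges on ResourceQuota spec.scopes: a quota scoped BestEffort counts only pods with no resource requests or limits, while NotBestEffort counts only pods that have them. A minimal sketch of the two quotas (names are illustrative):
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-besteffort
    spec:
      hard:
        pods: "5"
      scopes: ["BestEffort"]
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-not-besteffort
    spec:
      hard:
        pods: "5"
      scopes: ["NotBestEffort"]
    EOF
    # A pod without requests/limits increments only quota-besteffort;
    # a pod with them increments only quota-not-besteffort.
    kubectl describe quota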
• [SLOW TEST:16.260 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":139,"skipped":2438,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:18:07.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 27 00:18:15.456: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 27 00:18:15.464: INFO: Pod pod-with-prestop-http-hook still exists Mar 27 00:18:17.464: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 27 00:18:17.468: INFO: Pod pod-with-prestop-http-hook still exists Mar 27 00:18:19.464: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 27 00:18:19.468: INFO: Pod pod-with-prestop-http-hook still exists Mar 27 00:18:21.464: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 27 00:18:21.468: INFO: Pod pod-with-prestop-http-hook still exists Mar 27 00:18:23.464: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 27 00:18:23.468: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:18:23.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1274" for this suite. 
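The preStop flow above uses an HTTP hook: on pod deletion the kubelet issues the configured GET before stopping the container, and the test then polls its handler pod to confirm the request arrived. A minimal sketch of the hook side, assuming a separately running handler reachable at an illustrative address:
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-http-demo
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.2
        lifecycle:
          preStop:
            httpGet:
              path: /echo?msg=prestop   # illustrative handler endpoint
              port: 8080
              host: 10.0.0.10           # illustrative IP of the handler pod
    EOF
    # Deleting the pod triggers the preStop GET before termination:
    kubectl delete pod prestop-http-demo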
• [SLOW TEST:16.144 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2448,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:18:23.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-k8bnq in namespace proxy-2131 I0327 00:18:23.608756 7 runners.go:190] Created replication controller with name: proxy-service-k8bnq, namespace: proxy-2131, replica count: 1 I0327 00:18:24.659253 7 runners.go:190] proxy-service-k8bnq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0327 00:18:25.659465 7 runners.go:190] proxy-service-k8bnq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0327 00:18:26.659694 7 runners.go:190] proxy-service-k8bnq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0327 00:18:27.659966 7 runners.go:190] proxy-service-k8bnq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0327 00:18:28.660158 7 runners.go:190] proxy-service-k8bnq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0327 00:18:29.660427 7 runners.go:190] proxy-service-k8bnq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 27 00:18:29.664: INFO: setup took 6.139330249s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 27 00:18:29.672: INFO: (0) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... 
(200; 7.167009ms) Mar 27 00:18:29.673: INFO: (0) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 7.724983ms) Mar 27 00:18:29.673: INFO: (0) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname2/proxy/: bar (200; 8.453415ms) Mar 27 00:18:29.673: INFO: (0) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname2/proxy/: bar (200; 8.951679ms) Mar 27 00:18:29.673: INFO: (0) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 8.898984ms) Mar 27 00:18:29.674: INFO: (0) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 9.156328ms) Mar 27 00:18:29.674: INFO: (0) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:1080/proxy/: ... (200; 8.969596ms) Mar 27 00:18:29.674: INFO: (0) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 9.279481ms) Mar 27 00:18:29.675: INFO: (0) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname1/proxy/: foo (200; 10.133289ms) Mar 27 00:18:29.675: INFO: (0) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname1/proxy/: foo (200; 10.302656ms) Mar 27 00:18:29.675: INFO: (0) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml/proxy/: test (200; 10.741266ms) Mar 27 00:18:29.679: INFO: (0) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: test<... (200; 3.482624ms) Mar 27 00:18:29.684: INFO: (1) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml/proxy/: test (200; 3.626842ms) Mar 27 00:18:29.684: INFO: (1) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 3.605325ms) Mar 27 00:18:29.684: INFO: (1) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:462/proxy/: tls qux (200; 3.698774ms) Mar 27 00:18:29.684: INFO: (1) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 3.741777ms) Mar 27 00:18:29.684: INFO: (1) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname2/proxy/: bar (200; 4.088036ms) Mar 27 00:18:29.684: INFO: (1) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: ... (200; 4.437608ms) Mar 27 00:18:29.685: INFO: (1) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname1/proxy/: foo (200; 4.624089ms) Mar 27 00:18:29.685: INFO: (1) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname2/proxy/: bar (200; 4.633932ms) Mar 27 00:18:29.685: INFO: (1) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname2/proxy/: tls qux (200; 4.607589ms) Mar 27 00:18:29.688: INFO: (2) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:1080/proxy/: ... (200; 3.070388ms) Mar 27 00:18:29.689: INFO: (2) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 3.566219ms) Mar 27 00:18:29.689: INFO: (2) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... 
(200; 3.575481ms) Mar 27 00:18:29.689: INFO: (2) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname2/proxy/: bar (200; 3.928498ms) Mar 27 00:18:29.689: INFO: (2) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 4.137045ms) Mar 27 00:18:29.689: INFO: (2) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:460/proxy/: tls baz (200; 4.126199ms) Mar 27 00:18:29.689: INFO: (2) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname1/proxy/: foo (200; 4.182062ms) Mar 27 00:18:29.689: INFO: (2) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: test (200; 4.795062ms) Mar 27 00:18:29.690: INFO: (2) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname2/proxy/: tls qux (200; 4.852558ms) Mar 27 00:18:29.690: INFO: (2) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:462/proxy/: tls qux (200; 5.05751ms) Mar 27 00:18:29.693: INFO: (3) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:460/proxy/: tls baz (200; 3.281039ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname2/proxy/: tls qux (200; 3.507716ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 3.743761ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname2/proxy/: bar (200; 3.929607ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... (200; 3.861377ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 4.06915ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: test (200; 4.324406ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname1/proxy/: foo (200; 4.269513ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname2/proxy/: bar (200; 4.273304ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 4.301277ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:1080/proxy/: ... (200; 4.2751ms) Mar 27 00:18:29.694: INFO: (3) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:462/proxy/: tls qux (200; 4.343828ms) Mar 27 00:18:29.695: INFO: (3) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname1/proxy/: foo (200; 4.411559ms) Mar 27 00:18:29.695: INFO: (3) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 4.663137ms) Mar 27 00:18:29.695: INFO: (3) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname1/proxy/: tls baz (200; 4.895715ms) Mar 27 00:18:29.699: INFO: (4) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 3.928972ms) Mar 27 00:18:29.699: INFO: (4) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... (200; 4.031451ms) Mar 27 00:18:29.699: INFO: (4) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml/proxy/: test (200; 4.068494ms) Mar 27 00:18:29.699: INFO: (4) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:1080/proxy/: ... 
(200; 4.128987ms) Mar 27 00:18:29.699: INFO: (4) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: ... (200; 2.300047ms) Mar 27 00:18:29.703: INFO: (5) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 2.328915ms) Mar 27 00:18:29.706: INFO: (5) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: test (200; 5.673812ms) Mar 27 00:18:29.707: INFO: (5) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:460/proxy/: tls baz (200; 5.74004ms) Mar 27 00:18:29.707: INFO: (5) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname1/proxy/: tls baz (200; 5.883344ms) Mar 27 00:18:29.707: INFO: (5) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 5.934167ms) Mar 27 00:18:29.707: INFO: (5) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 5.970374ms) Mar 27 00:18:29.707: INFO: (5) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... (200; 5.923161ms) Mar 27 00:18:29.707: INFO: (5) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname1/proxy/: foo (200; 5.94025ms) Mar 27 00:18:29.710: INFO: (6) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:1080/proxy/: ... (200; 2.634025ms) Mar 27 00:18:29.710: INFO: (6) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:462/proxy/: tls qux (200; 2.658853ms) Mar 27 00:18:29.710: INFO: (6) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... (200; 2.888494ms) Mar 27 00:18:29.713: INFO: (6) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 5.771557ms) Mar 27 00:18:29.713: INFO: (6) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 6.22022ms) Mar 27 00:18:29.714: INFO: (6) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 6.860048ms) Mar 27 00:18:29.714: INFO: (6) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 7.099657ms) Mar 27 00:18:29.714: INFO: (6) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: test (200; 7.180924ms) Mar 27 00:18:29.714: INFO: (6) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname2/proxy/: bar (200; 7.251375ms) Mar 27 00:18:29.718: INFO: (6) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname2/proxy/: bar (200; 10.76524ms) Mar 27 00:18:29.718: INFO: (6) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname1/proxy/: foo (200; 10.936818ms) Mar 27 00:18:29.718: INFO: (6) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname2/proxy/: tls qux (200; 10.949435ms) Mar 27 00:18:29.718: INFO: (6) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname1/proxy/: tls baz (200; 10.950533ms) Mar 27 00:18:29.719: INFO: (6) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname1/proxy/: foo (200; 11.701131ms) Mar 27 00:18:29.729: INFO: (7) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 9.729243ms) Mar 27 00:18:29.729: INFO: (7) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 9.899751ms) Mar 27 00:18:29.729: INFO: (7) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: test (200; 9.94032ms) Mar 27 00:18:29.729: INFO: (7) 
/api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:1080/proxy/: ... (200; 9.914876ms) Mar 27 00:18:29.729: INFO: (7) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:462/proxy/: tls qux (200; 9.982432ms) Mar 27 00:18:29.730: INFO: (7) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 10.617264ms) Mar 27 00:18:29.730: INFO: (7) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 10.577218ms) Mar 27 00:18:29.730: INFO: (7) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:460/proxy/: tls baz (200; 10.617565ms) Mar 27 00:18:29.730: INFO: (7) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... (200; 10.956249ms) Mar 27 00:18:29.730: INFO: (7) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname2/proxy/: bar (200; 11.030921ms) Mar 27 00:18:29.730: INFO: (7) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname2/proxy/: bar (200; 11.248176ms) Mar 27 00:18:29.730: INFO: (7) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname2/proxy/: tls qux (200; 11.397691ms) Mar 27 00:18:29.731: INFO: (7) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname1/proxy/: foo (200; 11.697679ms) Mar 27 00:18:29.731: INFO: (7) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname1/proxy/: tls baz (200; 12.007299ms) Mar 27 00:18:29.731: INFO: (7) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname1/proxy/: foo (200; 12.434951ms) Mar 27 00:18:29.737: INFO: (8) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:460/proxy/: tls baz (200; 4.992366ms) Mar 27 00:18:29.739: INFO: (8) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:462/proxy/: tls qux (200; 7.794243ms) Mar 27 00:18:29.739: INFO: (8) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname1/proxy/: foo (200; 7.74915ms) Mar 27 00:18:29.739: INFO: (8) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname1/proxy/: foo (200; 7.812793ms) Mar 27 00:18:29.739: INFO: (8) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:1080/proxy/: test<... (200; 7.880525ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/services/http:proxy-service-k8bnq:portname2/proxy/: bar (200; 8.469476ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:160/proxy/: foo (200; 8.484025ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname2/proxy/: tls qux (200; 8.428966ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/services/https:proxy-service-k8bnq:tlsportname1/proxy/: tls baz (200; 8.441549ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/pods/https:proxy-service-k8bnq-j7fml:443/proxy/: test (200; 8.542693ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/pods/proxy-service-k8bnq-j7fml:162/proxy/: bar (200; 8.570292ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/services/proxy-service-k8bnq:portname2/proxy/: bar (200; 8.626992ms) Mar 27 00:18:29.740: INFO: (8) /api/v1/namespaces/proxy-2131/pods/http:proxy-service-k8bnq-j7fml:1080/proxy/: ... 
Mar 27 00:18:29.740 - 00:18:29.799: INFO: (8) through (19): twelve further rounds of proxy requests through the apiserver to pod proxy-service-k8bnq-j7fml in namespace proxy-2131 (plain ports 160, 162 and 1080, TLS ports 443, 460 and 462, with and without explicit http:/https: schemes) and to service proxy-service-k8bnq (portname1, portname2, tlsportname1, tlsportname2). Every request returned 200 with the expected body ("foo", "bar", "test", "tls baz" or "tls qux") and latencies between roughly 2ms and 9ms. [~190 near-identical request/latency lines, whose HTML-escaped response bodies were garbled in extraction, collapsed here]
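The URLs in this test exercise the apiserver proxy subresource: GET /api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/<path> makes the apiserver forward the request to the pod, and the services/... form does the same for services. A minimal client-go sketch of one such call follows; the namespace, pod name and port are illustrative, not taken from this run, and DoRaw taking a context assumes a reasonably recent client-go.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to GET /api/v1/namespaces/default/pods/http:mypod:8080/proxy/healthz:
	// the apiserver proxies the request to port 8080 of pod "mypod".
	body, err := cs.CoreV1().Pods("default").
		ProxyGet("http", "mypod", "8080", "/healthz", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}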
[log truncated: the tail of iteration (19), the proxy test teardown and its PASSED record, and the header of the following [k8s.io] Container Runtime test are missing; the log resumes mid-setup] >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:19:12.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6209" for this suite.
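The 'expected RestartCount/Phase/Ready/State' steps read the pod's status fields after its container exits under restart policies Always (rpa), OnFailure (rpof) and Never (rpn). A minimal client-go sketch of inspecting those fields; the namespace and pod name are placeholders, not from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "terminate-cmd-rpa", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Phase:", pod.Status.Phase) // Running / Succeeded / Failed, per restart policy
	for _, s := range pod.Status.ContainerStatuses {
		// RestartCount, Ready and State are the fields the e2e test asserts on.
		fmt.Printf("%s restarts=%d ready=%v state=%+v\n", s.Name, s.RestartCount, s.Ready, s.State)
	}
}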
• [SLOW TEST:29.597 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:19:12.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:19:16.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5682" for this suite. 
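The wrapper-volume test mounts a Secret volume and a ConfigMap volume in the same pod and checks that the two materialized mounts do not conflict. A sketch of such a pod spec follows, assuming a secret "demo-secret" and configmap "demo-config" already exist; both names are invented for illustration.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// twoSourcePod mounts a Secret and a ConfigMap at separate paths; the
// kubelet materializes each in its own wrapper volume, and the test's
// claim is that the two must coexist without conflict.
func twoSourcePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret /etc/config"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret"},
					{Name: "config-vol", MountPath: "/etc/config"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"},
				}},
				{Name: "config-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
					},
				}},
			},
		},
	}
}

func main() { _ = twoSourcePod() }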
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":143,"skipped":2510,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:19:16.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:19:17.429: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:19:19.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865157, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865157, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865157, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865157, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:19:22.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:19:22.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7178" for this suite. STEP: Destroying namespace "webhook-7178-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.001 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":144,"skipped":2521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:19:22.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-774d721c-7d43-4487-bef1-12e6378cf72f in namespace container-probe-2481 Mar 27 00:19:26.735: INFO: Started pod busybox-774d721c-7d43-4487-bef1-12e6378cf72f in namespace container-probe-2481 STEP: checking the pod's current state and verifying that restartCount is present Mar 27 00:19:26.738: INFO: Initial restart count of pod busybox-774d721c-7d43-4487-bef1-12e6378cf72f is 0 Mar 27 00:20:16.851: INFO: Restart count of pod container-probe-2481/busybox-774d721c-7d43-4487-bef1-12e6378cf72f is now 1 (50.112755053s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:20:16.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2481" for this suite. 
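The restart after ~50s is the exec liveness probe at work: the kubelet periodically runs "cat /tmp/health" inside the container and restarts it once the command fails. A sketch of the relevant container spec follows; the busybox command is an assumption modeled on the common e2e pattern, not copied from this run, and the Handler field name matches client-go of this era (renamed ProbeHandler in v0.22+).

package main

import corev1 "k8s.io/api/core/v1"

// livenessContainer creates /tmp/health, removes it after 10s, and lets
// the kubelet's exec probe fail, so the container is restarted and
// restartCount goes 0 -> 1, as in the log above.
func livenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // ProbeHandler in client-go >= v0.22
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
}

func main() { _ = livenessContainer() }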
• [SLOW TEST:54.306 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2560,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:20:16.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:20:16.988: INFO: Create a RollingUpdate DaemonSet Mar 27 00:20:16.991: INFO: Check that daemon pods launch on every node of the cluster Mar 27 00:20:17.037: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:20:17.040: INFO: Number of nodes with available pods: 0 Mar 27 00:20:17.040: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:20:18.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:20:18.049: INFO: Number of nodes with available pods: 0 Mar 27 00:20:18.049: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:20:19.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:20:19.047: INFO: Number of nodes with available pods: 0 Mar 27 00:20:19.047: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:20:20.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:20:20.049: INFO: Number of nodes with available pods: 0 Mar 27 00:20:20.049: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:20:21.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:20:21.049: INFO: Number of nodes with available pods: 1 Mar 27 00:20:21.049: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:20:22.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 
00:20:22.046: INFO: Number of nodes with available pods: 2 Mar 27 00:20:22.047: INFO: Number of running nodes: 2, number of available pods: 2 Mar 27 00:20:22.047: INFO: Update the DaemonSet to trigger a rollout Mar 27 00:20:22.052: INFO: Updating DaemonSet daemon-set Mar 27 00:20:25.066: INFO: Roll back the DaemonSet before rollout is complete Mar 27 00:20:25.072: INFO: Updating DaemonSet daemon-set Mar 27 00:20:25.072: INFO: Make sure DaemonSet rollback is complete Mar 27 00:20:25.078: INFO: Wrong image for pod: daemon-set-7ck29. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 27 00:20:25.078: INFO: Pod daemon-set-7ck29 is not available Mar 27 00:20:25.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:20:26.114: INFO: Wrong image for pod: daemon-set-7ck29. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 27 00:20:26.114: INFO: Pod daemon-set-7ck29 is not available Mar 27 00:20:26.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:20:27.122: INFO: Pod daemon-set-vkpnd is not available Mar 27 00:20:27.272: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4368, will wait for the garbage collector to delete the pods Mar 27 00:20:27.364: INFO: Deleting DaemonSet.extensions daemon-set took: 5.81393ms Mar 27 00:20:27.664: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.253874ms Mar 27 00:20:33.068: INFO: Number of nodes with available pods: 0 Mar 27 00:20:33.068: INFO: Number of running nodes: 0, number of available pods: 0 Mar 27 00:20:33.071: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4368/daemonsets","resourceVersion":"3077824"},"items":null} Mar 27 00:20:33.073: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4368/pods","resourceVersion":"3077824"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:20:33.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4368" for this suite. 
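A rollback before the rollout completes is just a second template update that restores the previous image; pods the bad rollout never touched keep running, which is the "without unnecessary restarts" property being checked. A client-go sketch of trigger-then-rollback follows, with an illustrative namespace; kubectl rollout undo daemonset/daemon-set is the CLI equivalent.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ds := cs.AppsV1().DaemonSets("default")

	d, err := ds.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := d.Spec.Template.Spec.Containers[0].Image

	// Trigger a RollingUpdate rollout with an image that can never pull...
	d.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if d, err = ds.Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// ...then roll back before it completes by restoring the old template.
	d.Spec.Template.Spec.Containers[0].Image = good
	if _, err = ds.Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}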
• [SLOW TEST:16.171 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":146,"skipped":2570,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:20:33.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-96192274-1f38-461d-ac69-cebcc3672cb2 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-96192274-1f38-461d-ac69-cebcc3672cb2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:20:39.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7800" for this suite. 
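The update step mutates the ConfigMap object in place; the kubelet later re-syncs the projected file, which is what "waiting to observe update in volume" polls for. A sketch of the update call follows, with invented names and data.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cms := cs.CoreV1().ConfigMaps("default")

	cm, err := cms.Get(ctx, "demo-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	// Pods mounting this ConfigMap as a volume see the new file content
	// once the kubelet re-syncs the volume (typically within a minute).
	cm.Data["data-1"] = "value-2"
	if _, err := cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}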
• [SLOW TEST:6.160 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:20:39.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 27 00:20:39.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1231' Mar 27 00:20:39.551: INFO: stderr: "" Mar 27 00:20:39.551: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 27 00:20:39.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1231' Mar 27 00:20:39.664: INFO: stderr: "" Mar 27 00:20:39.664: INFO: stdout: "update-demo-nautilus-ntb9c update-demo-nautilus-q8wjm " Mar 27 00:20:39.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntb9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:39.760: INFO: stderr: "" Mar 27 00:20:39.760: INFO: stdout: "" Mar 27 00:20:39.760: INFO: update-demo-nautilus-ntb9c is created but not running Mar 27 00:20:44.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1231' Mar 27 00:20:44.851: INFO: stderr: "" Mar 27 00:20:44.851: INFO: stdout: "update-demo-nautilus-ntb9c update-demo-nautilus-q8wjm " Mar 27 00:20:44.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntb9c -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:44.959: INFO: stderr: "" Mar 27 00:20:44.959: INFO: stdout: "true" Mar 27 00:20:44.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntb9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:45.050: INFO: stderr: "" Mar 27 00:20:45.050: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 27 00:20:45.050: INFO: validating pod update-demo-nautilus-ntb9c Mar 27 00:20:45.054: INFO: got data: { "image": "nautilus.jpg" } Mar 27 00:20:45.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 27 00:20:45.054: INFO: update-demo-nautilus-ntb9c is verified up and running Mar 27 00:20:45.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8wjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:45.139: INFO: stderr: "" Mar 27 00:20:45.140: INFO: stdout: "true" Mar 27 00:20:45.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8wjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:45.233: INFO: stderr: "" Mar 27 00:20:45.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 27 00:20:45.233: INFO: validating pod update-demo-nautilus-q8wjm Mar 27 00:20:45.237: INFO: got data: { "image": "nautilus.jpg" } Mar 27 00:20:45.237: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 27 00:20:45.237: INFO: update-demo-nautilus-q8wjm is verified up and running STEP: scaling down the replication controller Mar 27 00:20:45.240: INFO: scanned /root for discovery docs: Mar 27 00:20:45.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1231' Mar 27 00:20:46.354: INFO: stderr: "" Mar 27 00:20:46.354: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 27 00:20:46.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1231' Mar 27 00:20:46.446: INFO: stderr: "" Mar 27 00:20:46.446: INFO: stdout: "update-demo-nautilus-ntb9c update-demo-nautilus-q8wjm " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 27 00:20:51.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1231' Mar 27 00:20:51.546: INFO: stderr: "" Mar 27 00:20:51.546: INFO: stdout: "update-demo-nautilus-ntb9c update-demo-nautilus-q8wjm " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 27 00:20:56.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1231' Mar 27 00:20:56.641: INFO: stderr: "" Mar 27 00:20:56.641: INFO: stdout: "update-demo-nautilus-q8wjm " Mar 27 00:20:56.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8wjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:56.738: INFO: stderr: "" Mar 27 00:20:56.738: INFO: stdout: "true" Mar 27 00:20:56.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8wjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:56.836: INFO: stderr: "" Mar 27 00:20:56.836: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 27 00:20:56.836: INFO: validating pod update-demo-nautilus-q8wjm Mar 27 00:20:56.839: INFO: got data: { "image": "nautilus.jpg" } Mar 27 00:20:56.839: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 27 00:20:56.839: INFO: update-demo-nautilus-q8wjm is verified up and running STEP: scaling up the replication controller Mar 27 00:20:56.842: INFO: scanned /root for discovery docs: Mar 27 00:20:56.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1231' Mar 27 00:20:57.966: INFO: stderr: "" Mar 27 00:20:57.966: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 27 00:20:57.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1231' Mar 27 00:20:58.065: INFO: stderr: "" Mar 27 00:20:58.065: INFO: stdout: "update-demo-nautilus-dxf96 update-demo-nautilus-q8wjm " Mar 27 00:20:58.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dxf96 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:20:58.149: INFO: stderr: "" Mar 27 00:20:58.149: INFO: stdout: "" Mar 27 00:20:58.149: INFO: update-demo-nautilus-dxf96 is created but not running Mar 27 00:21:03.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1231' Mar 27 00:21:03.249: INFO: stderr: "" Mar 27 00:21:03.249: INFO: stdout: "update-demo-nautilus-dxf96 update-demo-nautilus-q8wjm " Mar 27 00:21:03.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dxf96 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:21:03.340: INFO: stderr: "" Mar 27 00:21:03.340: INFO: stdout: "true" Mar 27 00:21:03.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dxf96 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:21:03.426: INFO: stderr: "" Mar 27 00:21:03.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 27 00:21:03.426: INFO: validating pod update-demo-nautilus-dxf96 Mar 27 00:21:03.430: INFO: got data: { "image": "nautilus.jpg" } Mar 27 00:21:03.430: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 27 00:21:03.430: INFO: update-demo-nautilus-dxf96 is verified up and running Mar 27 00:21:03.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8wjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:21:03.526: INFO: stderr: "" Mar 27 00:21:03.526: INFO: stdout: "true" Mar 27 00:21:03.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q8wjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1231' Mar 27 00:21:03.610: INFO: stderr: "" Mar 27 00:21:03.610: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 27 00:21:03.610: INFO: validating pod update-demo-nautilus-q8wjm Mar 27 00:21:03.613: INFO: got data: { "image": "nautilus.jpg" } Mar 27 00:21:03.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 27 00:21:03.613: INFO: update-demo-nautilus-q8wjm is verified up and running STEP: using delete to clean up resources Mar 27 00:21:03.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1231' Mar 27 00:21:03.706: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:21:03.706: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 27 00:21:03.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1231' Mar 27 00:21:03.798: INFO: stderr: "No resources found in kubectl-1231 namespace.\n" Mar 27 00:21:03.798: INFO: stdout: "" Mar 27 00:21:03.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1231 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 27 00:21:03.884: INFO: stderr: "" Mar 27 00:21:03.884: INFO: stdout: "update-demo-nautilus-dxf96\nupdate-demo-nautilus-q8wjm\n" Mar 27 00:21:04.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1231' Mar 27 00:21:04.478: INFO: stderr: "No resources found in kubectl-1231 namespace.\n" Mar 27 00:21:04.478: INFO: stdout: "" Mar 27 00:21:04.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1231 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 27 00:21:04.564: INFO: stderr: "" Mar 27 00:21:04.564: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:04.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1231" for this suite. 
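kubectl scale ultimately just rewrites spec.replicas on the ReplicationController and then waits for the pod list to settle, which is what the go-template queries above poll for. An equivalent client-go sketch follows, with illustrative names.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	rcs := cs.CoreV1().ReplicationControllers("default")

	rc, err := rcs.Get(ctx, "update-demo-nautilus", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(1) // scale 2 -> 1; set back to 2 to scale up again
	rc.Spec.Replicas = &replicas
	if _, err := rcs.Update(ctx, rc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}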
• [SLOW TEST:25.321 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":148,"skipped":2615,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:04.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:21:04.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6" in namespace "downward-api-8837" to be "Succeeded or Failed" Mar 27 00:21:04.809: INFO: Pod "downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.633699ms Mar 27 00:21:06.813: INFO: Pod "downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007551917s Mar 27 00:21:08.818: INFO: Pod "downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011813132s STEP: Saw pod success Mar 27 00:21:08.818: INFO: Pod "downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6" satisfied condition "Succeeded or Failed" Mar 27 00:21:08.821: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6 container client-container: STEP: delete the pod Mar 27 00:21:08.883: INFO: Waiting for pod downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6 to disappear Mar 27 00:21:08.894: INFO: Pod downwardapi-volume-918c4869-b7f3-4bac-b4d7-f91b187a66d6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:08.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8837" for this suite. 
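When the container sets no memory limit, a downward API resourceFieldRef for limits.memory falls back to the node's allocatable memory, which is the behavior verified above. A sketch of such a volume item follows; the file path and container name are invented.

package main

import corev1 "k8s.io/api/core/v1"

// memLimitVolume exposes the container's effective memory limit as a file;
// with no limit set on the container, the kubelet substitutes node
// allocatable memory instead.
func memLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
}

func main() { _ = memLimitVolume() }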
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2632,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:08.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:24.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4169" for this suite. STEP: Destroying namespace "nsdeletetest-7190" for this suite. Mar 27 00:21:24.206: INFO: Namespace nsdeletetest-7190 was already deleted STEP: Destroying namespace "nsdeletetest-4960" for this suite. 
• [SLOW TEST:15.308 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":150,"skipped":2633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:24.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0327 00:21:25.315489 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 27 00:21:25.315: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:25.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2466" for this suite. 
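The ReplicaSet and pods disappear because the deployment was deleted with a non-orphaning propagation policy, so the garbage collector follows the owner references downward; the "expected 0 ... got 1/2" steps are simply the poll catching GC mid-flight. A sketch of such a delete follows; the deployment name is invented.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: the deployment goes away immediately, then the
	// garbage collector deletes its ReplicaSet and pods via owner references.
	// (DeletePropagationOrphan would leave them behind instead.)
	policy := metav1.DeletePropagationBackground
	err = cs.AppsV1().Deployments("default").Delete(
		context.TODO(), "demo-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}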
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":151,"skipped":2658,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:25.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:21:25.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df" in namespace "projected-1395" to be "Succeeded or Failed" Mar 27 00:21:25.462: INFO: Pod "downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df": Phase="Pending", Reason="", readiness=false. Elapsed: 70.147811ms Mar 27 00:21:27.466: INFO: Pod "downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073850444s Mar 27 00:21:29.470: INFO: Pod "downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077930211s STEP: Saw pod success Mar 27 00:21:29.470: INFO: Pod "downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df" satisfied condition "Succeeded or Failed" Mar 27 00:21:29.473: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df container client-container: STEP: delete the pod Mar 27 00:21:29.489: INFO: Waiting for pod downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df to disappear Mar 27 00:21:29.493: INFO: Pod downwardapi-volume-d6a41193-1517-4a40-b3ed-afc10a3775df no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:29.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1395" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:29.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:21:30.413: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:21:32.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865290, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865290, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865290, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865290, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:21:35.457: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 27 00:21:35.487: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:35.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1094" for this suite. STEP: Destroying namespace "webhook-1094-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.101 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":153,"skipped":2716,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:35.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:21:35.674: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4" in namespace "downward-api-550" to be "Succeeded or Failed" Mar 27 00:21:35.745: INFO: Pod "downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 70.856881ms Mar 27 00:21:37.748: INFO: Pod "downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0745377s Mar 27 00:21:39.753: INFO: Pod "downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079022668s STEP: Saw pod success Mar 27 00:21:39.753: INFO: Pod "downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4" satisfied condition "Succeeded or Failed" Mar 27 00:21:39.756: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4 container client-container: STEP: delete the pod Mar 27 00:21:39.776: INFO: Waiting for pod downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4 to disappear Mar 27 00:21:39.780: INFO: Pod downwardapi-volume-5b5ea7ae-8c36-4b27-8afc-7f7f9defd5e4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:39.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-550" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2719,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:39.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:21:39.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b" in namespace "projected-7404" to be "Succeeded or Failed" Mar 27 00:21:39.869: INFO: Pod "downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.373401ms Mar 27 00:21:41.873: INFO: Pod "downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015234752s Mar 27 00:21:43.882: INFO: Pod "downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024005632s STEP: Saw pod success Mar 27 00:21:43.882: INFO: Pod "downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b" satisfied condition "Succeeded or Failed" Mar 27 00:21:43.885: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b container client-container: STEP: delete the pod Mar 27 00:21:43.920: INFO: Waiting for pod downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b to disappear Mar 27 00:21:43.924: INFO: Pod downwardapi-volume-966a5953-9d2d-48c0-8891-d26bbbbd876b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:43.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7404" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2719,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:43.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:21:44.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1" in namespace "projected-7636" to be "Succeeded or Failed" Mar 27 00:21:44.022: INFO: Pod "downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.003878ms Mar 27 00:21:46.026: INFO: Pod "downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022232532s Mar 27 00:21:48.030: INFO: Pod "downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026674405s STEP: Saw pod success Mar 27 00:21:48.030: INFO: Pod "downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1" satisfied condition "Succeeded or Failed" Mar 27 00:21:48.034: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1 container client-container: STEP: delete the pod Mar 27 00:21:48.062: INFO: Waiting for pod downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1 to disappear Mar 27 00:21:48.098: INFO: Pod downwardapi-volume-e30de878-36ee-4fed-903a-e1dcab16e4c1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:48.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7636" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:48.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 27 00:21:48.206: INFO: Waiting up to 5m0s for pod "pod-7b371177-91dc-40a0-8942-427c8e46b555" in namespace "emptydir-9730" to be "Succeeded or Failed" Mar 27 00:21:48.230: INFO: Pod "pod-7b371177-91dc-40a0-8942-427c8e46b555": Phase="Pending", Reason="", readiness=false. Elapsed: 24.373739ms Mar 27 00:21:50.235: INFO: Pod "pod-7b371177-91dc-40a0-8942-427c8e46b555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028823477s Mar 27 00:21:52.240: INFO: Pod "pod-7b371177-91dc-40a0-8942-427c8e46b555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033843809s STEP: Saw pod success Mar 27 00:21:52.240: INFO: Pod "pod-7b371177-91dc-40a0-8942-427c8e46b555" satisfied condition "Succeeded or Failed" Mar 27 00:21:52.243: INFO: Trying to get logs from node latest-worker pod pod-7b371177-91dc-40a0-8942-427c8e46b555 container test-container: STEP: delete the pod Mar 27 00:21:52.274: INFO: Waiting for pod pod-7b371177-91dc-40a0-8942-427c8e46b555 to disappear Mar 27 00:21:52.283: INFO: Pod pod-7b371177-91dc-40a0-8942-427c8e46b555 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:21:52.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9730" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2775,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:21:52.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8648 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 27 00:21:52.432: INFO: Found 0 stateful pods, waiting for 3 Mar 27 00:22:02.439: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:22:02.439: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:22:02.439: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 27 00:22:02.463: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 27 00:22:12.503: INFO: Updating stateful set ss2 Mar 27 00:22:12.519: INFO: Waiting for Pod statefulset-8648/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 27 00:22:22.757: INFO: Found 2 stateful pods, waiting for 3 Mar 27 00:22:32.762: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:22:32.762: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:22:32.762: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 27 00:22:32.784: INFO: Updating stateful set ss2 Mar 27 00:22:32.834: INFO: Waiting for Pod statefulset-8648/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 27 00:22:42.858: INFO: Updating stateful set ss2 Mar 27 00:22:42.884: INFO: Waiting for StatefulSet statefulset-8648/ss2 to complete update Mar 27 00:22:42.884: INFO: Waiting for Pod statefulset-8648/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 27 00:22:52.892: INFO: Deleting 
all statefulset in ns statefulset-8648 Mar 27 00:22:52.894: INFO: Scaling statefulset ss2 to 0 Mar 27 00:23:12.911: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:23:12.914: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:12.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8648" for this suite. • [SLOW TEST:80.654 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":158,"skipped":2797,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:12.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-8308f288-1f0e-40eb-9f94-609b6d12b971 STEP: Creating a pod to test consume configMaps Mar 27 00:23:13.073: INFO: Waiting up to 5m0s for pod "pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63" in namespace "configmap-5916" to be "Succeeded or Failed" Mar 27 00:23:13.082: INFO: Pod "pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63": Phase="Pending", Reason="", readiness=false. Elapsed: 9.927253ms Mar 27 00:23:15.086: INFO: Pod "pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01339269s Mar 27 00:23:17.090: INFO: Pod "pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017013017s STEP: Saw pod success Mar 27 00:23:17.090: INFO: Pod "pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63" satisfied condition "Succeeded or Failed" Mar 27 00:23:17.092: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63 container configmap-volume-test: STEP: delete the pod Mar 27 00:23:17.140: INFO: Waiting for pod pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63 to disappear Mar 27 00:23:17.155: INFO: Pod pod-configmaps-853603b7-68ba-42a9-8d52-066b6fc10a63 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:17.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5916" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2811,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:17.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 27 00:23:17.313: INFO: Waiting up to 5m0s for pod "pod-c157f37e-3e8f-4834-a787-00cbcd195ede" in namespace "emptydir-9115" to be "Succeeded or Failed" Mar 27 00:23:17.316: INFO: Pod "pod-c157f37e-3e8f-4834-a787-00cbcd195ede": Phase="Pending", Reason="", readiness=false. Elapsed: 3.431319ms Mar 27 00:23:19.327: INFO: Pod "pod-c157f37e-3e8f-4834-a787-00cbcd195ede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013981788s Mar 27 00:23:21.331: INFO: Pod "pod-c157f37e-3e8f-4834-a787-00cbcd195ede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017961618s STEP: Saw pod success Mar 27 00:23:21.331: INFO: Pod "pod-c157f37e-3e8f-4834-a787-00cbcd195ede" satisfied condition "Succeeded or Failed" Mar 27 00:23:21.334: INFO: Trying to get logs from node latest-worker pod pod-c157f37e-3e8f-4834-a787-00cbcd195ede container test-container: STEP: delete the pod Mar 27 00:23:21.393: INFO: Waiting for pod pod-c157f37e-3e8f-4834-a787-00cbcd195ede to disappear Mar 27 00:23:21.399: INFO: Pod pod-c157f37e-3e8f-4834-a787-00cbcd195ede no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:21.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9115" for this suite. 
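------------------------------
Looking back at the StatefulSet run above (ss2): both the canary and the phased rolling update are driven by the RollingUpdate partition, under which pods with ordinal >= partition move to the new template revision (ss2-65c7964b94) while lower ordinals keep the old one (ss2-84f9d6bf57). A sketch of the strategy stanza; setting the partition to 2 on a 3-replica set updates only ss2-2, the canary, and lowering it afterwards performs the phased rollout (the partition value is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// Pods with ordinal >= Partition are updated to the new revision;
	// pods below it stay on the old revision until the partition is lowered.
	partition := int32(2)
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	out, _ := json.MarshalIndent(strategy, "", "  ")
	fmt.Println(string(out))
}
------------------------------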
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2812,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:21.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:23:21.445: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 27 00:23:23.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3616 create -f -' Mar 27 00:23:26.407: INFO: stderr: "" Mar 27 00:23:26.407: INFO: stdout: "e2e-test-crd-publish-openapi-1413-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 27 00:23:26.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3616 delete e2e-test-crd-publish-openapi-1413-crds test-cr' Mar 27 00:23:26.522: INFO: stderr: "" Mar 27 00:23:26.522: INFO: stdout: "e2e-test-crd-publish-openapi-1413-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 27 00:23:26.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3616 apply -f -' Mar 27 00:23:26.757: INFO: stderr: "" Mar 27 00:23:26.757: INFO: stdout: "e2e-test-crd-publish-openapi-1413-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 27 00:23:26.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3616 delete e2e-test-crd-publish-openapi-1413-crds test-cr' Mar 27 00:23:26.870: INFO: stderr: "" Mar 27 00:23:26.870: INFO: stdout: "e2e-test-crd-publish-openapi-1413-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 27 00:23:26.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1413-crds' Mar 27 00:23:27.110: INFO: stderr: "" Mar 27 00:23:27.110: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1413-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:29.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3616" for this suite. 
• [SLOW TEST:7.609 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":161,"skipped":2832,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:29.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:34.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5876" for this suite. 
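------------------------------
Adoption in the ReplicationController test above works purely through label selection: the pre-created orphan pod carries the same 'name' label the controller selects on, so once the RC exists its controller sets itself as the pod's ownerReference instead of creating a new replica. A sketch of a matching RC (the image is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	one := int32(1)
	labels := map[string]string{"name": "pod-adoption"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			// An existing pod with this label is adopted rather than replaced.
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption",
						Image: "k8s.gcr.io/pause:3.2", // illustrative image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
------------------------------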
• [SLOW TEST:5.159 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":162,"skipped":2852,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:34.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:34.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5338" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:34.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:38.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7884" for this suite. 
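------------------------------
"Should print the output to logs" is verified through the pods/log subresource; a sketch of the same read using client-go (namespace and pod name are placeholders, the kubeconfig path is the one from this run):

package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GetLogs targets the pods/log subresource, the same endpoint the
	// kubelet test (and `kubectl logs`) reads from.
	req := clientset.CoreV1().Pods("kubelet-test-demo").
		GetLogs("busybox-scheduling-demo", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
------------------------------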
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2890,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:38.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:23:39.182: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:23:41.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865419, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865419, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865419, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865419, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:23:44.211: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:44.361: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7096" for this suite. STEP: Destroying namespace "webhook-7096-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.037 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":165,"skipped":2928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:44.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 27 00:23:49.593: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:23:49.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8810" for this suite. 
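------------------------------
The release half of the ReplicaSet test above ("When the matched label of one of its pods change ... Then the pod is released") can be reproduced by patching the pod's label out of the selector, after which the controller removes its ownerReference and spins up a replacement. A sketch using a strategic-merge patch (namespace, names and the new label value are placeholders; the kubeconfig path is the one from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Move the pod out of the ReplicaSet's selector by rewriting the label
	// it matches on; the controller then orphans (releases) the pod.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-adoption-release"}}}`)
	pod, err := clientset.CoreV1().Pods("replicaset-demo").Patch(
		context.TODO(), "pod-adoption-release", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("owner references after relabel: %v\n", pod.OwnerReferences)
}
------------------------------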
• [SLOW TEST:5.176 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":166,"skipped":2964,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:23:49.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-dda68ce1-d97d-46ff-9a24-c8cd69f546c1 STEP: Creating configMap with name cm-test-opt-upd-45a98b5c-d24c-4e29-9cf8-338821c4d8b2 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-dda68ce1-d97d-46ff-9a24-c8cd69f546c1 STEP: Updating configmap cm-test-opt-upd-45a98b5c-d24c-4e29-9cf8-338821c4d8b2 STEP: Creating configMap with name cm-test-opt-create-71fead08-1ae3-481c-9391-eb464603cd87 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:25:20.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2697" for this suite. 
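------------------------------
The configMap volumes in the test above are mounted with Optional=true, which is what lets the pod keep running while cm-test-opt-del-... is deleted and while cm-test-opt-create-... does not yet exist; the kubelet then refreshes the mounted files on its periodic sync, which is why the "waiting to observe update in volume" step dominates this test's runtime. An illustrative volume stanza:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Optional=true: the mount tolerates a missing ConfigMap, and file
	// contents track later creations, updates and deletions.
	optional := true
	vol := corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create-demo"}, // placeholder name
				Optional:             &optional,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------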
• [SLOW TEST:90.706 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2986,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:25:20.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 27 00:25:24.466: INFO: &Pod{ObjectMeta:{send-events-9c97fd95-c2e5-4053-bdf4-cc4b0047aeaf events-1191 /api/v1/namespaces/events-1191/pods/send-events-9c97fd95-c2e5-4053-bdf4-cc4b0047aeaf bba5e8d2-bd4a-4809-b9b4-1ebf36592629 3079662 0 2020-03-27 00:25:20 +0000 UTC map[name:foo time:442418233] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dtp2j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dtp2j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dtp2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:25:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:25:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:25:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:25:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.142,StartTime:2020-03-27 00:25:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:25:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://e2f9fb36f75117df0700a65fb9f1b3ce230b990cbce9ab7ba5f58aee467bfc81,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 27 00:25:26.471: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 27 00:25:28.475: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:25:28.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1191" for this suite. • [SLOW TEST:8.145 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":168,"skipped":2992,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:25:28.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Mar 27 00:25:28.570: INFO: Waiting up to 5m0s for pod "pod-733e9cd0-ef2a-4f3c-823d-13554216f76f" in namespace "emptydir-7864" to be "Succeeded or Failed" Mar 27 00:25:28.575: INFO: Pod "pod-733e9cd0-ef2a-4f3c-823d-13554216f76f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513089ms Mar 27 00:25:30.591: INFO: Pod "pod-733e9cd0-ef2a-4f3c-823d-13554216f76f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020967778s Mar 27 00:25:32.596: INFO: Pod "pod-733e9cd0-ef2a-4f3c-823d-13554216f76f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025752895s STEP: Saw pod success Mar 27 00:25:32.596: INFO: Pod "pod-733e9cd0-ef2a-4f3c-823d-13554216f76f" satisfied condition "Succeeded or Failed" Mar 27 00:25:32.600: INFO: Trying to get logs from node latest-worker pod pod-733e9cd0-ef2a-4f3c-823d-13554216f76f container test-container: STEP: delete the pod Mar 27 00:25:32.648: INFO: Waiting for pod pod-733e9cd0-ef2a-4f3c-823d-13554216f76f to disappear Mar 27 00:25:32.658: INFO: Pod pod-733e9cd0-ef2a-4f3c-823d-13554216f76f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:25:32.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7864" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:25:32.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1146 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 27 00:25:32.750: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 27 00:25:32.766: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:25:34.774: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:25:36.771: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:38.770: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:40.771: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:42.771: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:44.770: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:46.771: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:48.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:50.771: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:52.771: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:25:54.770: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 27 00:25:54.774: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 27 00:25:58.957: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.144 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1146 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} 
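------------------------------
The command in the ExecWithOptions line just above, echo hostName | nc -w 1 -u 10.244.2.144 8081, is the per-endpoint UDP reachability probe: it sends the literal string hostName to a netserver pod and expects that pod's hostname back. What it does, sketched as a small Go UDP client (the address is a placeholder pod IP from this test namespace):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "10.244.2.144:8081")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer conn.Close()

	// Send the probe payload the netserver echoes its hostname for.
	if _, err := conn.Write([]byte("hostName")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conn.SetReadDeadline(time.Now().Add(1 * time.Second)) // mirror nc -w 1
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("endpoint answered: %s\n", buf[:n])
}
------------------------------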
Mar 27 00:25:58.957: INFO: >>> kubeConfig: /root/.kube/config I0327 00:25:58.997328 7 log.go:172] (0xc0026c8420) (0xc001ed2b40) Create stream I0327 00:25:58.997374 7 log.go:172] (0xc0026c8420) (0xc001ed2b40) Stream added, broadcasting: 1 I0327 00:25:58.999671 7 log.go:172] (0xc0026c8420) Reply frame received for 1 I0327 00:25:58.999715 7 log.go:172] (0xc0026c8420) (0xc001ed2c80) Create stream I0327 00:25:58.999738 7 log.go:172] (0xc0026c8420) (0xc001ed2c80) Stream added, broadcasting: 3 I0327 00:25:59.000534 7 log.go:172] (0xc0026c8420) Reply frame received for 3 I0327 00:25:59.000567 7 log.go:172] (0xc0026c8420) (0xc001ed2d20) Create stream I0327 00:25:59.000579 7 log.go:172] (0xc0026c8420) (0xc001ed2d20) Stream added, broadcasting: 5 I0327 00:25:59.001579 7 log.go:172] (0xc0026c8420) Reply frame received for 5 I0327 00:26:00.055029 7 log.go:172] (0xc0026c8420) Data frame received for 5 I0327 00:26:00.055075 7 log.go:172] (0xc0026c8420) Data frame received for 3 I0327 00:26:00.055114 7 log.go:172] (0xc001ed2c80) (3) Data frame handling I0327 00:26:00.055125 7 log.go:172] (0xc001ed2c80) (3) Data frame sent I0327 00:26:00.055134 7 log.go:172] (0xc0026c8420) Data frame received for 3 I0327 00:26:00.055145 7 log.go:172] (0xc001ed2c80) (3) Data frame handling I0327 00:26:00.055170 7 log.go:172] (0xc001ed2d20) (5) Data frame handling I0327 00:26:00.057079 7 log.go:172] (0xc0026c8420) Data frame received for 1 I0327 00:26:00.057092 7 log.go:172] (0xc001ed2b40) (1) Data frame handling I0327 00:26:00.057098 7 log.go:172] (0xc001ed2b40) (1) Data frame sent I0327 00:26:00.057105 7 log.go:172] (0xc0026c8420) (0xc001ed2b40) Stream removed, broadcasting: 1 I0327 00:26:00.057204 7 log.go:172] (0xc0026c8420) Go away received I0327 00:26:00.057272 7 log.go:172] (0xc0026c8420) (0xc001ed2b40) Stream removed, broadcasting: 1 I0327 00:26:00.057298 7 log.go:172] (0xc0026c8420) (0xc001ed2c80) Stream removed, broadcasting: 3 I0327 00:26:00.057309 7 log.go:172] (0xc0026c8420) (0xc001ed2d20) Stream removed, broadcasting: 5 Mar 27 00:26:00.057: INFO: Found all expected endpoints: [netserver-0] Mar 27 00:26:00.064: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.21 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1146 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:26:00.064: INFO: >>> kubeConfig: /root/.kube/config I0327 00:26:00.091038 7 log.go:172] (0xc002c0a790) (0xc00238af00) Create stream I0327 00:26:00.091061 7 log.go:172] (0xc002c0a790) (0xc00238af00) Stream added, broadcasting: 1 I0327 00:26:00.093983 7 log.go:172] (0xc002c0a790) Reply frame received for 1 I0327 00:26:00.094043 7 log.go:172] (0xc002c0a790) (0xc0016c9540) Create stream I0327 00:26:00.094060 7 log.go:172] (0xc002c0a790) (0xc0016c9540) Stream added, broadcasting: 3 I0327 00:26:00.095095 7 log.go:172] (0xc002c0a790) Reply frame received for 3 I0327 00:26:00.095155 7 log.go:172] (0xc002c0a790) (0xc0016c95e0) Create stream I0327 00:26:00.095175 7 log.go:172] (0xc002c0a790) (0xc0016c95e0) Stream added, broadcasting: 5 I0327 00:26:00.096229 7 log.go:172] (0xc002c0a790) Reply frame received for 5 I0327 00:26:01.169278 7 log.go:172] (0xc002c0a790) Data frame received for 3 I0327 00:26:01.169333 7 log.go:172] (0xc0016c9540) (3) Data frame handling I0327 00:26:01.169365 7 log.go:172] (0xc0016c9540) (3) Data frame sent I0327 00:26:01.169393 7 log.go:172] (0xc002c0a790) Data frame received for 3 I0327 00:26:01.169492 7 log.go:172] 
(0xc002c0a790) Data frame received for 5 I0327 00:26:01.169606 7 log.go:172] (0xc0016c95e0) (5) Data frame handling I0327 00:26:01.169649 7 log.go:172] (0xc0016c9540) (3) Data frame handling I0327 00:26:01.171679 7 log.go:172] (0xc002c0a790) Data frame received for 1 I0327 00:26:01.171717 7 log.go:172] (0xc00238af00) (1) Data frame handling I0327 00:26:01.171742 7 log.go:172] (0xc00238af00) (1) Data frame sent I0327 00:26:01.171759 7 log.go:172] (0xc002c0a790) (0xc00238af00) Stream removed, broadcasting: 1 I0327 00:26:01.171776 7 log.go:172] (0xc002c0a790) Go away received I0327 00:26:01.171873 7 log.go:172] (0xc002c0a790) (0xc00238af00) Stream removed, broadcasting: 1 I0327 00:26:01.171887 7 log.go:172] (0xc002c0a790) (0xc0016c9540) Stream removed, broadcasting: 3 I0327 00:26:01.171893 7 log.go:172] (0xc002c0a790) (0xc0016c95e0) Stream removed, broadcasting: 5 Mar 27 00:26:01.171: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:26:01.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1146" for this suite. • [SLOW TEST:28.514 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":3017,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:26:01.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0327 00:26:12.095599 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
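------------------------------
The arrangement this garbage-collector test builds: half of simpletest-rc-to-be-deleted's pods also list simpletest-rc-to-stay in their ownerReferences, so when the first RC is deleted the GC removes only pods whose sole owner went away, and dually-owned pods survive because a valid owner remains. A sketch of such an ownerReferences list plus the delete options that put an owner into the "waiting for dependents" state (UIDs are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two owners on one pod: the GC deletes a dependent only when none of
	// its owners remain.
	controller := true
	owners := []metav1.OwnerReference{
		{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       "simpletest-rc-to-be-deleted",
			UID:        "11111111-1111-1111-1111-111111111111", // placeholder
			Controller: &controller,
		},
		{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       "simpletest-rc-to-stay",
			UID:        "22222222-2222-2222-2222-222222222222", // placeholder
		},
	}
	// Foreground propagation keeps the owner around (finalizer set) until
	// the GC has processed its dependents.
	fg := metav1.DeletePropagationForeground
	deleteOpts := metav1.DeleteOptions{PropagationPolicy: &fg}

	for _, v := range []interface{}{owners, deleteOpts} {
		out, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(out))
	}
}
------------------------------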
Mar 27 00:26:12.095: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:26:12.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3644" for this suite. • [SLOW TEST:10.921 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":171,"skipped":3028,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:26:12.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Mar 27 00:26:12.422: INFO: Waiting up to 5m0s for pod "client-containers-509f07be-e999-45e7-b613-129589419c24" in namespace "containers-8777" to be "Succeeded or Failed" Mar 27 00:26:12.444: INFO: Pod "client-containers-509f07be-e999-45e7-b613-129589419c24": Phase="Pending", Reason="", readiness=false. Elapsed: 21.943643ms Mar 27 00:26:14.449: INFO: Pod "client-containers-509f07be-e999-45e7-b613-129589419c24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02673577s Mar 27 00:26:16.454: INFO: Pod "client-containers-509f07be-e999-45e7-b613-129589419c24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031324887s STEP: Saw pod success Mar 27 00:26:16.454: INFO: Pod "client-containers-509f07be-e999-45e7-b613-129589419c24" satisfied condition "Succeeded or Failed" Mar 27 00:26:16.457: INFO: Trying to get logs from node latest-worker2 pod client-containers-509f07be-e999-45e7-b613-129589419c24 container test-container: STEP: delete the pod Mar 27 00:26:16.475: INFO: Waiting for pod client-containers-509f07be-e999-45e7-b613-129589419c24 to disappear Mar 27 00:26:16.486: INFO: Pod client-containers-509f07be-e999-45e7-b613-129589419c24 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:26:16.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8777" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":3028,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:26:16.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:26:16.583: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 27 00:26:21.586: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 27 00:26:21.586: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 27 00:26:23.590: INFO: Creating deployment "test-rollover-deployment" Mar 27 00:26:23.614: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 27 00:26:25.620: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 27 00:26:25.626: INFO: Ensure that both replica sets have 1 created replica Mar 27 00:26:25.630: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 27 00:26:25.636: INFO: Updating deployment test-rollover-deployment Mar 27 00:26:25.636: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 27 00:26:27.646: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 27 00:26:27.653: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 27 00:26:27.678: INFO: all replica sets need to contain the pod-template-hash label Mar 27 00:26:27.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865585, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:26:29.684: INFO: all replica sets need to contain the pod-template-hash label Mar 27 00:26:29.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:26:31.686: INFO: all replica sets need to contain the pod-template-hash label Mar 27 00:26:31.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:26:33.686: INFO: all replica sets need to contain the pod-template-hash label Mar 27 00:26:33.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:26:35.715: INFO: all replica sets need to contain the pod-template-hash label Mar 27 00:26:35.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:26:37.687: INFO: all replica sets need to contain the pod-template-hash label Mar 27 00:26:37.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865583, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:26:39.684: INFO: Mar 27 00:26:39.684: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 27 00:26:39.693: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7424 /apis/apps/v1/namespaces/deployment-7424/deployments/test-rollover-deployment fb13581f-83c1-46d4-bd81-5fa70a32acec 3080296 2 2020-03-27 00:26:23 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038f8988 ClusterFirst map[] 
false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-27 00:26:23 +0000 UTC,LastTransitionTime:2020-03-27 00:26:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-03-27 00:26:39 +0000 UTC,LastTransitionTime:2020-03-27 00:26:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 27 00:26:39.696: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-7424 /apis/apps/v1/namespaces/deployment-7424/replicasets/test-rollover-deployment-78df7bc796 317a9555-9175-4a49-8f70-9d0766ee8f3c 3080285 2 2020-03-27 00:26:25 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment fb13581f-83c1-46d4-bd81-5fa70a32acec 0xc0038d1217 0xc0038d1218}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038d1288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:26:39.696: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 27 00:26:39.697: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7424 /apis/apps/v1/namespaces/deployment-7424/replicasets/test-rollover-controller 7a959f43-44d7-4603-88eb-079b90435b90 3080294 2 2020-03-27 00:26:16 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment fb13581f-83c1-46d4-bd81-5fa70a32acec 0xc0038d1147 0xc0038d1148}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038d11a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:26:39.697: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-7424 /apis/apps/v1/namespaces/deployment-7424/replicasets/test-rollover-deployment-f6c94f66c 624b9398-ee79-468c-99f5-7e422daa6060 3080236 2 2020-03-27 00:26:23 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment fb13581f-83c1-46d4-bd81-5fa70a32acec 0xc0038d12f0 0xc0038d12f1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038d1368 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:26:39.701: INFO: Pod "test-rollover-deployment-78df7bc796-295wv" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-295wv test-rollover-deployment-78df7bc796- deployment-7424 /api/v1/namespaces/deployment-7424/pods/test-rollover-deployment-78df7bc796-295wv c08029e3-01fe-4356-bed9-4f140c4ef309 3080253 0 2020-03-27 00:26:25 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 317a9555-9175-4a49-8f70-9d0766ee8f3c 0xc0038d1917 0xc0038d1918}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zfqqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zfqqq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zfqqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:26:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:26:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:26:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:26:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.151,StartTime:2020-03-27 00:26:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:26:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7698e9fca47f8edcf6e3601c69cce0d5bcb998e2eb17d05901e1de8f65a4d450,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:26:39.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7424" for this suite. • [SLOW TEST:23.215 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":173,"skipped":3064,"failed":0} [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:26:39.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:26:39.766: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b6bd7623-8f25-46b2-9396-fdbb5edefae9" in namespace "security-context-test-3042" to be "Succeeded or Failed" Mar 27 00:26:39.776: INFO: Pod "alpine-nnp-false-b6bd7623-8f25-46b2-9396-fdbb5edefae9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.079702ms Mar 27 00:26:41.820: INFO: Pod "alpine-nnp-false-b6bd7623-8f25-46b2-9396-fdbb5edefae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053625716s Mar 27 00:26:43.824: INFO: Pod "alpine-nnp-false-b6bd7623-8f25-46b2-9396-fdbb5edefae9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058027295s Mar 27 00:26:43.824: INFO: Pod "alpine-nnp-false-b6bd7623-8f25-46b2-9396-fdbb5edefae9" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:26:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3042" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3064,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:26:43.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 27 00:26:43.903: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:26:58.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3777" for this suite.
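[Annotation] The "mark a version not served" step above boils down to flipping served to false on one version of a multi-version CRD; the apiserver then drops that version's definitions from the published OpenAPI document while the still-served version stays. A sketch of such a two-version CRD using the apiextensions v1 Go types; the group, kind, and resource names here are illustrative assumptions:

    // Two-version CRD; setting Served to false on one version (as the test
    // does) removes its schema from /openapi/v2 while the other stays.
    package crdsketch

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
        schema := &apiextensionsv1.CustomResourceValidation{
            OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
        }
        return &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
                    {Name: "v1", Served: true, Storage: true, Schema: schema},
                    // Flip Served to false here and the v2 definitions
                    // disappear from the published OpenAPI spec.
                    {Name: "v2", Served: true, Storage: false, Schema: schema},
                },
            },
        }
    }

Only publication stops: the unserved version still sits in spec.versions, which is why the test also checks that the other version is unchanged.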
• [SLOW TEST:14.320 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":175,"skipped":3065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:26:58.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:27:14.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7231" for this suite. 
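[Annotation] "Tasks sometimes fail and are locally restarted" means the Job's pod template uses restartPolicy OnFailure, so the kubelet restarts the failed container in place rather than the Job controller replacing the pod. A sketch of a Job in that shape; the image, names, and the marker-file trick (fail on the first run, succeed after the in-place restart, since the emptyDir outlives the container) are illustrative assumptions:

    package jobsketch

    import (
        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func locallyRestartedJob() *batchv1.Job {
        completions := int32(2)
        return &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail"},
            Spec: batchv1.JobSpec{
                Completions: &completions,
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        // OnFailure: the kubelet restarts the container in the
                        // same pod; the Job does not have to recreate the pod.
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Volumes: []corev1.Volume{{
                            Name: "data",
                            VolumeSource: corev1.VolumeSource{
                                EmptyDir: &corev1.EmptyDirVolumeSource{},
                            },
                        }},
                        Containers: []corev1.Container{{
                            Name:  "c",
                            Image: "busybox",
                            // Fail once, then succeed: the emptyDir survives the
                            // restarted container, so the marker file persists.
                            Command: []string{"sh", "-c",
                                "if [ ! -f /data/ok ]; then touch /data/ok; exit 1; fi"},
                            VolumeMounts: []corev1.VolumeMount{{
                                Name: "data", MountPath: "/data",
                            }},
                        }},
                    },
                },
            },
        }
    }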
• [SLOW TEST:16.076 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":176,"skipped":3089,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:27:14.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 27 00:27:14.320: INFO: >>> kubeConfig: /root/.kube/config Mar 27 00:27:16.243: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:27:26.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8750" for this suite. 
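[Annotation] The multiple-groups check above amounts to fetching the apiserver's aggregated OpenAPI document and confirming that definitions from both CRD groups appear in it. A minimal sketch of that fetch with client-go; the clientset value is assumed to exist:

    package openapisketch

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // fetchOpenAPISpec returns the raw aggregated OpenAPI v2 document; CRDs
    // published by the apiserver show up under its "definitions" section.
    func fetchOpenAPISpec(ctx context.Context, cs kubernetes.Interface) ([]byte, error) {
        return cs.Discovery().RESTClient().
            Get().
            AbsPath("/openapi/v2").
            Do(ctx).
            Raw()
    }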
• [SLOW TEST:12.597 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":177,"skipped":3093,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:27:26.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:27:30.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4165" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3112,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:27:44.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8695" for this suite. • [SLOW TEST:13.150 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":179,"skipped":3117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:27:44.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2871 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-2871 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2871 Mar 27 00:27:44.135: INFO: Found 0 stateful pods, waiting for 1 Mar 27 00:27:54.139: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 27 00:27:54.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2871 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:27:54.450: INFO: stderr: "I0327 00:27:54.286887 2345 log.go:172]
(0xc0000e0bb0) (0xc0006db540) Create stream\nI0327 00:27:54.286947 2345 log.go:172] (0xc0000e0bb0) (0xc0006db540) Stream added, broadcasting: 1\nI0327 00:27:54.289659 2345 log.go:172] (0xc0000e0bb0) Reply frame received for 1\nI0327 00:27:54.289706 2345 log.go:172] (0xc0000e0bb0) (0xc000603540) Create stream\nI0327 00:27:54.289722 2345 log.go:172] (0xc0000e0bb0) (0xc000603540) Stream added, broadcasting: 3\nI0327 00:27:54.290858 2345 log.go:172] (0xc0000e0bb0) Reply frame received for 3\nI0327 00:27:54.290890 2345 log.go:172] (0xc0000e0bb0) (0xc00049a960) Create stream\nI0327 00:27:54.290919 2345 log.go:172] (0xc0000e0bb0) (0xc00049a960) Stream added, broadcasting: 5\nI0327 00:27:54.292024 2345 log.go:172] (0xc0000e0bb0) Reply frame received for 5\nI0327 00:27:54.376216 2345 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0327 00:27:54.376242 2345 log.go:172] (0xc00049a960) (5) Data frame handling\nI0327 00:27:54.376258 2345 log.go:172] (0xc00049a960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:27:54.442390 2345 log.go:172] (0xc0000e0bb0) Data frame received for 3\nI0327 00:27:54.442529 2345 log.go:172] (0xc000603540) (3) Data frame handling\nI0327 00:27:54.442619 2345 log.go:172] (0xc000603540) (3) Data frame sent\nI0327 00:27:54.442704 2345 log.go:172] (0xc0000e0bb0) Data frame received for 3\nI0327 00:27:54.442761 2345 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0327 00:27:54.442799 2345 log.go:172] (0xc00049a960) (5) Data frame handling\nI0327 00:27:54.442834 2345 log.go:172] (0xc000603540) (3) Data frame handling\nI0327 00:27:54.445326 2345 log.go:172] (0xc0000e0bb0) Data frame received for 1\nI0327 00:27:54.445363 2345 log.go:172] (0xc0006db540) (1) Data frame handling\nI0327 00:27:54.445376 2345 log.go:172] (0xc0006db540) (1) Data frame sent\nI0327 00:27:54.445406 2345 log.go:172] (0xc0000e0bb0) (0xc0006db540) Stream removed, broadcasting: 1\nI0327 00:27:54.445457 2345 log.go:172] (0xc0000e0bb0) Go away received\nI0327 00:27:54.446055 2345 log.go:172] (0xc0000e0bb0) (0xc0006db540) Stream removed, broadcasting: 1\nI0327 00:27:54.446080 2345 log.go:172] (0xc0000e0bb0) (0xc000603540) Stream removed, broadcasting: 3\nI0327 00:27:54.446093 2345 log.go:172] (0xc0000e0bb0) (0xc00049a960) Stream removed, broadcasting: 5\n" Mar 27 00:27:54.450: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:27:54.450: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:27:54.454: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 27 00:28:04.458: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:28:04.459: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:28:04.474: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:04.474: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:04.474: INFO: Mar 27 00:28:04.474: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 27 
00:28:05.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99361131s Mar 27 00:28:06.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989535176s Mar 27 00:28:07.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.909001913s Mar 27 00:28:08.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.904656669s Mar 27 00:28:09.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.899498118s Mar 27 00:28:10.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.894192229s Mar 27 00:28:11.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.874002101s Mar 27 00:28:12.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.868773458s Mar 27 00:28:13.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 861.939794ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2871 Mar 27 00:28:14.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2871 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 27 00:28:14.844: INFO: stderr: "I0327 00:28:14.746533 2368 log.go:172] (0xc000b63130) (0xc000bfe640) Create stream\nI0327 00:28:14.746589 2368 log.go:172] (0xc000b63130) (0xc000bfe640) Stream added, broadcasting: 1\nI0327 00:28:14.750904 2368 log.go:172] (0xc000b63130) Reply frame received for 1\nI0327 00:28:14.750955 2368 log.go:172] (0xc000b63130) (0xc00067b720) Create stream\nI0327 00:28:14.750973 2368 log.go:172] (0xc000b63130) (0xc00067b720) Stream added, broadcasting: 3\nI0327 00:28:14.751890 2368 log.go:172] (0xc000b63130) Reply frame received for 3\nI0327 00:28:14.751915 2368 log.go:172] (0xc000b63130) (0xc000542b40) Create stream\nI0327 00:28:14.751923 2368 log.go:172] (0xc000b63130) (0xc000542b40) Stream added, broadcasting: 5\nI0327 00:28:14.752855 2368 log.go:172] (0xc000b63130) Reply frame received for 5\nI0327 00:28:14.836778 2368 log.go:172] (0xc000b63130) Data frame received for 3\nI0327 00:28:14.836823 2368 log.go:172] (0xc00067b720) (3) Data frame handling\nI0327 00:28:14.836838 2368 log.go:172] (0xc00067b720) (3) Data frame sent\nI0327 00:28:14.836847 2368 log.go:172] (0xc000b63130) Data frame received for 3\nI0327 00:28:14.836854 2368 log.go:172] (0xc00067b720) (3) Data frame handling\nI0327 00:28:14.836887 2368 log.go:172] (0xc000b63130) Data frame received for 5\nI0327 00:28:14.836895 2368 log.go:172] (0xc000542b40) (5) Data frame handling\nI0327 00:28:14.836907 2368 log.go:172] (0xc000542b40) (5) Data frame sent\nI0327 00:28:14.836914 2368 log.go:172] (0xc000b63130) Data frame received for 5\nI0327 00:28:14.836921 2368 log.go:172] (0xc000542b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0327 00:28:14.838891 2368 log.go:172] (0xc000b63130) Data frame received for 1\nI0327 00:28:14.838941 2368 log.go:172] (0xc000bfe640) (1) Data frame handling\nI0327 00:28:14.838980 2368 log.go:172] (0xc000bfe640) (1) Data frame sent\nI0327 00:28:14.839090 2368 log.go:172] (0xc000b63130) (0xc000bfe640) Stream removed, broadcasting: 1\nI0327 00:28:14.839145 2368 log.go:172] (0xc000b63130) Go away received\nI0327 00:28:14.839563 2368 log.go:172] (0xc000b63130) (0xc000bfe640) Stream removed, broadcasting: 1\nI0327 00:28:14.839587 2368 log.go:172] (0xc000b63130) (0xc00067b720) Stream removed, broadcasting: 3\nI0327 00:28:14.839598 2368 
log.go:172] (0xc000b63130) (0xc000542b40) Stream removed, broadcasting: 5\n" Mar 27 00:28:14.844: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 27 00:28:14.844: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 27 00:28:14.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2871 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 27 00:28:15.047: INFO: stderr: "I0327 00:28:14.975979 2389 log.go:172] (0xc00098e160) (0xc00068f680) Create stream\nI0327 00:28:14.976044 2389 log.go:172] (0xc00098e160) (0xc00068f680) Stream added, broadcasting: 1\nI0327 00:28:14.978774 2389 log.go:172] (0xc00098e160) Reply frame received for 1\nI0327 00:28:14.978805 2389 log.go:172] (0xc00098e160) (0xc00098c000) Create stream\nI0327 00:28:14.978823 2389 log.go:172] (0xc00098e160) (0xc00098c000) Stream added, broadcasting: 3\nI0327 00:28:14.979839 2389 log.go:172] (0xc00098e160) Reply frame received for 3\nI0327 00:28:14.979886 2389 log.go:172] (0xc00098e160) (0xc00098c0a0) Create stream\nI0327 00:28:14.979900 2389 log.go:172] (0xc00098e160) (0xc00098c0a0) Stream added, broadcasting: 5\nI0327 00:28:14.980706 2389 log.go:172] (0xc00098e160) Reply frame received for 5\nI0327 00:28:15.041609 2389 log.go:172] (0xc00098e160) Data frame received for 5\nI0327 00:28:15.041645 2389 log.go:172] (0xc00098c0a0) (5) Data frame handling\nI0327 00:28:15.041659 2389 log.go:172] (0xc00098c0a0) (5) Data frame sent\nI0327 00:28:15.041669 2389 log.go:172] (0xc00098e160) Data frame received for 5\nI0327 00:28:15.041677 2389 log.go:172] (0xc00098c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0327 00:28:15.041718 2389 log.go:172] (0xc00098e160) Data frame received for 3\nI0327 00:28:15.041732 2389 log.go:172] (0xc00098c000) (3) Data frame handling\nI0327 00:28:15.041747 2389 log.go:172] (0xc00098c000) (3) Data frame sent\nI0327 00:28:15.041758 2389 log.go:172] (0xc00098e160) Data frame received for 3\nI0327 00:28:15.041766 2389 log.go:172] (0xc00098c000) (3) Data frame handling\nI0327 00:28:15.043044 2389 log.go:172] (0xc00098e160) Data frame received for 1\nI0327 00:28:15.043081 2389 log.go:172] (0xc00068f680) (1) Data frame handling\nI0327 00:28:15.043103 2389 log.go:172] (0xc00068f680) (1) Data frame sent\nI0327 00:28:15.043120 2389 log.go:172] (0xc00098e160) (0xc00068f680) Stream removed, broadcasting: 1\nI0327 00:28:15.043139 2389 log.go:172] (0xc00098e160) Go away received\nI0327 00:28:15.043534 2389 log.go:172] (0xc00098e160) (0xc00068f680) Stream removed, broadcasting: 1\nI0327 00:28:15.043551 2389 log.go:172] (0xc00098e160) (0xc00098c000) Stream removed, broadcasting: 3\nI0327 00:28:15.043560 2389 log.go:172] (0xc00098e160) (0xc00098c0a0) Stream removed, broadcasting: 5\n" Mar 27 00:28:15.047: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 27 00:28:15.047: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 27 00:28:15.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2871 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 27 
00:28:15.260: INFO: stderr: "I0327 00:28:15.187404 2409 log.go:172] (0xc00003a420) (0xc0006d12c0) Create stream\nI0327 00:28:15.187449 2409 log.go:172] (0xc00003a420) (0xc0006d12c0) Stream added, broadcasting: 1\nI0327 00:28:15.190179 2409 log.go:172] (0xc00003a420) Reply frame received for 1\nI0327 00:28:15.190221 2409 log.go:172] (0xc00003a420) (0xc000454b40) Create stream\nI0327 00:28:15.190234 2409 log.go:172] (0xc00003a420) (0xc000454b40) Stream added, broadcasting: 3\nI0327 00:28:15.191295 2409 log.go:172] (0xc00003a420) Reply frame received for 3\nI0327 00:28:15.191346 2409 log.go:172] (0xc00003a420) (0xc000a96000) Create stream\nI0327 00:28:15.191359 2409 log.go:172] (0xc00003a420) (0xc000a96000) Stream added, broadcasting: 5\nI0327 00:28:15.192261 2409 log.go:172] (0xc00003a420) Reply frame received for 5\nI0327 00:28:15.254820 2409 log.go:172] (0xc00003a420) Data frame received for 5\nI0327 00:28:15.254848 2409 log.go:172] (0xc000a96000) (5) Data frame handling\nI0327 00:28:15.254857 2409 log.go:172] (0xc000a96000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0327 00:28:15.254868 2409 log.go:172] (0xc00003a420) Data frame received for 3\nI0327 00:28:15.254873 2409 log.go:172] (0xc000454b40) (3) Data frame handling\nI0327 00:28:15.254884 2409 log.go:172] (0xc000454b40) (3) Data frame sent\nI0327 00:28:15.254893 2409 log.go:172] (0xc00003a420) Data frame received for 3\nI0327 00:28:15.254900 2409 log.go:172] (0xc000454b40) (3) Data frame handling\nI0327 00:28:15.254933 2409 log.go:172] (0xc00003a420) Data frame received for 5\nI0327 00:28:15.254948 2409 log.go:172] (0xc000a96000) (5) Data frame handling\nI0327 00:28:15.256393 2409 log.go:172] (0xc00003a420) Data frame received for 1\nI0327 00:28:15.256411 2409 log.go:172] (0xc0006d12c0) (1) Data frame handling\nI0327 00:28:15.256429 2409 log.go:172] (0xc0006d12c0) (1) Data frame sent\nI0327 00:28:15.256444 2409 log.go:172] (0xc00003a420) (0xc0006d12c0) Stream removed, broadcasting: 1\nI0327 00:28:15.256458 2409 log.go:172] (0xc00003a420) Go away received\nI0327 00:28:15.256823 2409 log.go:172] (0xc00003a420) (0xc0006d12c0) Stream removed, broadcasting: 1\nI0327 00:28:15.256839 2409 log.go:172] (0xc00003a420) (0xc000454b40) Stream removed, broadcasting: 3\nI0327 00:28:15.256848 2409 log.go:172] (0xc00003a420) (0xc000a96000) Stream removed, broadcasting: 5\n" Mar 27 00:28:15.260: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 27 00:28:15.260: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 27 00:28:15.263: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:28:15.263: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:28:15.264: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 27 00:28:15.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2871 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:28:15.470: INFO: stderr: "I0327 00:28:15.395211 2430 log.go:172] (0xc00003a6e0) (0xc00079b220) Create stream\nI0327 00:28:15.395282 2430 log.go:172] (0xc00003a6e0) (0xc00079b220) Stream added, 
broadcasting: 1\nI0327 00:28:15.398082 2430 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0327 00:28:15.398128 2430 log.go:172] (0xc00003a6e0) (0xc00079b400) Create stream\nI0327 00:28:15.398140 2430 log.go:172] (0xc00003a6e0) (0xc00079b400) Stream added, broadcasting: 3\nI0327 00:28:15.399145 2430 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0327 00:28:15.399186 2430 log.go:172] (0xc00003a6e0) (0xc00079b4a0) Create stream\nI0327 00:28:15.399200 2430 log.go:172] (0xc00003a6e0) (0xc00079b4a0) Stream added, broadcasting: 5\nI0327 00:28:15.400134 2430 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0327 00:28:15.463948 2430 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0327 00:28:15.464000 2430 log.go:172] (0xc00079b4a0) (5) Data frame handling\nI0327 00:28:15.464019 2430 log.go:172] (0xc00079b4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:28:15.464039 2430 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0327 00:28:15.464050 2430 log.go:172] (0xc00079b400) (3) Data frame handling\nI0327 00:28:15.464070 2430 log.go:172] (0xc00079b400) (3) Data frame sent\nI0327 00:28:15.464083 2430 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0327 00:28:15.464094 2430 log.go:172] (0xc00079b400) (3) Data frame handling\nI0327 00:28:15.464148 2430 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0327 00:28:15.464187 2430 log.go:172] (0xc00079b4a0) (5) Data frame handling\nI0327 00:28:15.465620 2430 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0327 00:28:15.465642 2430 log.go:172] (0xc00079b220) (1) Data frame handling\nI0327 00:28:15.465655 2430 log.go:172] (0xc00079b220) (1) Data frame sent\nI0327 00:28:15.465669 2430 log.go:172] (0xc00003a6e0) (0xc00079b220) Stream removed, broadcasting: 1\nI0327 00:28:15.465689 2430 log.go:172] (0xc00003a6e0) Go away received\nI0327 00:28:15.466066 2430 log.go:172] (0xc00003a6e0) (0xc00079b220) Stream removed, broadcasting: 1\nI0327 00:28:15.466089 2430 log.go:172] (0xc00003a6e0) (0xc00079b400) Stream removed, broadcasting: 3\nI0327 00:28:15.466101 2430 log.go:172] (0xc00003a6e0) (0xc00079b4a0) Stream removed, broadcasting: 5\n" Mar 27 00:28:15.470: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:28:15.470: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:28:15.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2871 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:28:15.714: INFO: stderr: "I0327 00:28:15.596409 2453 log.go:172] (0xc0009826e0) (0xc00052f2c0) Create stream\nI0327 00:28:15.596468 2453 log.go:172] (0xc0009826e0) (0xc00052f2c0) Stream added, broadcasting: 1\nI0327 00:28:15.599468 2453 log.go:172] (0xc0009826e0) Reply frame received for 1\nI0327 00:28:15.599509 2453 log.go:172] (0xc0009826e0) (0xc00052f360) Create stream\nI0327 00:28:15.599519 2453 log.go:172] (0xc0009826e0) (0xc00052f360) Stream added, broadcasting: 3\nI0327 00:28:15.600575 2453 log.go:172] (0xc0009826e0) Reply frame received for 3\nI0327 00:28:15.600625 2453 log.go:172] (0xc0009826e0) (0xc000990000) Create stream\nI0327 00:28:15.600644 2453 log.go:172] (0xc0009826e0) (0xc000990000) Stream added, broadcasting: 5\nI0327 00:28:15.601896 2453 log.go:172] (0xc0009826e0) Reply frame received for 5\nI0327 00:28:15.670894 2453 
log.go:172] (0xc0009826e0) Data frame received for 5\nI0327 00:28:15.670917 2453 log.go:172] (0xc000990000) (5) Data frame handling\nI0327 00:28:15.670931 2453 log.go:172] (0xc000990000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:28:15.707250 2453 log.go:172] (0xc0009826e0) Data frame received for 3\nI0327 00:28:15.707299 2453 log.go:172] (0xc00052f360) (3) Data frame handling\nI0327 00:28:15.707329 2453 log.go:172] (0xc00052f360) (3) Data frame sent\nI0327 00:28:15.707824 2453 log.go:172] (0xc0009826e0) Data frame received for 5\nI0327 00:28:15.707850 2453 log.go:172] (0xc000990000) (5) Data frame handling\nI0327 00:28:15.707866 2453 log.go:172] (0xc0009826e0) Data frame received for 3\nI0327 00:28:15.707873 2453 log.go:172] (0xc00052f360) (3) Data frame handling\nI0327 00:28:15.709545 2453 log.go:172] (0xc0009826e0) Data frame received for 1\nI0327 00:28:15.709590 2453 log.go:172] (0xc00052f2c0) (1) Data frame handling\nI0327 00:28:15.709614 2453 log.go:172] (0xc00052f2c0) (1) Data frame sent\nI0327 00:28:15.709644 2453 log.go:172] (0xc0009826e0) (0xc00052f2c0) Stream removed, broadcasting: 1\nI0327 00:28:15.709672 2453 log.go:172] (0xc0009826e0) Go away received\nI0327 00:28:15.710182 2453 log.go:172] (0xc0009826e0) (0xc00052f2c0) Stream removed, broadcasting: 1\nI0327 00:28:15.710210 2453 log.go:172] (0xc0009826e0) (0xc00052f360) Stream removed, broadcasting: 3\nI0327 00:28:15.710227 2453 log.go:172] (0xc0009826e0) (0xc000990000) Stream removed, broadcasting: 5\n" Mar 27 00:28:15.714: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:28:15.714: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:28:15.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2871 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:28:15.948: INFO: stderr: "I0327 00:28:15.862029 2475 log.go:172] (0xc0009c0000) (0xc0007dd360) Create stream\nI0327 00:28:15.862082 2475 log.go:172] (0xc0009c0000) (0xc0007dd360) Stream added, broadcasting: 1\nI0327 00:28:15.864673 2475 log.go:172] (0xc0009c0000) Reply frame received for 1\nI0327 00:28:15.864722 2475 log.go:172] (0xc0009c0000) (0xc0009ae000) Create stream\nI0327 00:28:15.864743 2475 log.go:172] (0xc0009c0000) (0xc0009ae000) Stream added, broadcasting: 3\nI0327 00:28:15.866064 2475 log.go:172] (0xc0009c0000) Reply frame received for 3\nI0327 00:28:15.866106 2475 log.go:172] (0xc0009c0000) (0xc0007dd540) Create stream\nI0327 00:28:15.866121 2475 log.go:172] (0xc0009c0000) (0xc0007dd540) Stream added, broadcasting: 5\nI0327 00:28:15.867185 2475 log.go:172] (0xc0009c0000) Reply frame received for 5\nI0327 00:28:15.914292 2475 log.go:172] (0xc0009c0000) Data frame received for 5\nI0327 00:28:15.914325 2475 log.go:172] (0xc0007dd540) (5) Data frame handling\nI0327 00:28:15.914343 2475 log.go:172] (0xc0007dd540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:28:15.941258 2475 log.go:172] (0xc0009c0000) Data frame received for 3\nI0327 00:28:15.941286 2475 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0327 00:28:15.941299 2475 log.go:172] (0xc0009ae000) (3) Data frame sent\nI0327 00:28:15.941305 2475 log.go:172] (0xc0009c0000) Data frame received for 3\nI0327 00:28:15.941311 2475 log.go:172] (0xc0009ae000) (3) Data frame 
handling\nI0327 00:28:15.941421 2475 log.go:172] (0xc0009c0000) Data frame received for 5\nI0327 00:28:15.941435 2475 log.go:172] (0xc0007dd540) (5) Data frame handling\nI0327 00:28:15.943330 2475 log.go:172] (0xc0009c0000) Data frame received for 1\nI0327 00:28:15.943370 2475 log.go:172] (0xc0007dd360) (1) Data frame handling\nI0327 00:28:15.943406 2475 log.go:172] (0xc0007dd360) (1) Data frame sent\nI0327 00:28:15.943432 2475 log.go:172] (0xc0009c0000) (0xc0007dd360) Stream removed, broadcasting: 1\nI0327 00:28:15.943475 2475 log.go:172] (0xc0009c0000) Go away received\nI0327 00:28:15.944141 2475 log.go:172] (0xc0009c0000) (0xc0007dd360) Stream removed, broadcasting: 1\nI0327 00:28:15.944180 2475 log.go:172] (0xc0009c0000) (0xc0009ae000) Stream removed, broadcasting: 3\nI0327 00:28:15.944201 2475 log.go:172] (0xc0009c0000) (0xc0007dd540) Stream removed, broadcasting: 5\n" Mar 27 00:28:15.948: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:28:15.948: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:28:15.948: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:28:15.951: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 27 00:28:25.959: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:28:25.959: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:28:25.959: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:28:25.988: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:25.988: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:25.988: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:25.988: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:25.988: INFO: Mar 27 00:28:25.988: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 27 00:28:26.996: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:26.996: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:26.996: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:26.996: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:26.996: INFO: Mar 27 00:28:26.996: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 27 00:28:28.001: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:28.001: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:28.001: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:28.001: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:28.001: INFO: Mar 27 00:28:28.001: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 27 00:28:29.007: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:29.007: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:29.007: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:29.007: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:04 +0000 UTC }] Mar 27 00:28:29.007: INFO: Mar 27 00:28:29.007: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 27 00:28:30.011: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:30.011: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:30.011: INFO: Mar 27 00:28:30.011: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 27 00:28:31.025: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:31.025: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:31.025: INFO: Mar 27 00:28:31.025: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 27 00:28:32.030: INFO: POD NODE PHASE GRACE CONDITIONS Mar 27 00:28:32.030: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:28:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 00:27:44 +0000 UTC }] Mar 27 00:28:32.030: INFO: Mar 27 00:28:32.030: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 27 00:28:33.053: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.935943175s Mar 27 00:28:34.058: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.912386202s Mar 27 00:28:35.060: INFO: Verifying statefulset ss doesn't scale past 0 for another 908.376734ms STEP: Scaling 
down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-2871 Mar 27 00:28:36.065: INFO: Scaling statefulset ss to 0 Mar 27 00:28:36.072: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 27 00:28:36.074: INFO: Deleting all statefulset in ns statefulset-2871 Mar 27 00:28:36.077: INFO: Scaling statefulset ss to 0 Mar 27 00:28:36.086: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:28:36.088: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:28:36.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2871" for this suite. • [SLOW TEST:52.067 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":180,"skipped":3159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:28:36.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:28:36.184: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c" in namespace "projected-6661" to be "Succeeded or Failed" Mar 27 00:28:36.187: INFO: Pod "downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.217924ms Mar 27 00:28:38.192: INFO: Pod "downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007679555s Mar 27 00:28:40.196: INFO: Pod "downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012057884s STEP: Saw pod success Mar 27 00:28:40.196: INFO: Pod "downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c" satisfied condition "Succeeded or Failed" Mar 27 00:28:40.200: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c container client-container: STEP: delete the pod Mar 27 00:28:40.272: INFO: Waiting for pod downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c to disappear Mar 27 00:28:40.280: INFO: Pod downwardapi-volume-e9a0dcd3-82ae-40cb-9ad5-a63f1b3c476c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:28:40.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6661" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3188,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:28:40.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:28:40.348: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c" in namespace "downward-api-2779" to be "Succeeded or Failed" Mar 27 00:28:40.357: INFO: Pod "downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.415815ms Mar 27 00:28:42.370: INFO: Pod "downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021946954s Mar 27 00:28:44.374: INFO: Pod "downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026415057s STEP: Saw pod success Mar 27 00:28:44.374: INFO: Pod "downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c" satisfied condition "Succeeded or Failed" Mar 27 00:28:44.378: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c container client-container: STEP: delete the pod Mar 27 00:28:44.421: INFO: Waiting for pod downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c to disappear Mar 27 00:28:44.424: INFO: Pod downwardapi-volume-dd24e9d7-2c09-4e33-8b6c-a283c3a84d9c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:28:44.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2779" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:28:44.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:28:44.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1110" for this suite. 
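[Note] The "secure master service" check above inspects the built-in kubernetes Service in the default namespace, which is how the API server is exposed inside the cluster on the HTTPS port. A rough manual equivalent, assuming the current kubeconfig context points at the same cluster (the conformance test reads the Service object through the API rather than via kubectl):

# Inspect the built-in API server Service and its endpoints
kubectl get service kubernetes -n default -o wide
kubectl get endpoints kubernetes -n default
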
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":183,"skipped":3277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:28:44.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 27 00:28:44.544: INFO: >>> kubeConfig: /root/.kube/config Mar 27 00:28:46.457: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:28:56.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3225" for this suite. • [SLOW TEST:12.512 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":184,"skipped":3316,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:28:57.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 27 00:28:57.070: INFO: Waiting up to 5m0s for pod "downward-api-c44f1961-7061-43de-9806-56f7d2f823d2" in namespace "downward-api-9346" to be "Succeeded or Failed" Mar 27 00:28:57.074: INFO: Pod "downward-api-c44f1961-7061-43de-9806-56f7d2f823d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.417629ms Mar 27 00:28:59.078: INFO: Pod "downward-api-c44f1961-7061-43de-9806-56f7d2f823d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008899185s Mar 27 00:29:01.083: INFO: Pod "downward-api-c44f1961-7061-43de-9806-56f7d2f823d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013193765s STEP: Saw pod success Mar 27 00:29:01.083: INFO: Pod "downward-api-c44f1961-7061-43de-9806-56f7d2f823d2" satisfied condition "Succeeded or Failed" Mar 27 00:29:01.086: INFO: Trying to get logs from node latest-worker2 pod downward-api-c44f1961-7061-43de-9806-56f7d2f823d2 container dapi-container: STEP: delete the pod Mar 27 00:29:01.132: INFO: Waiting for pod downward-api-c44f1961-7061-43de-9806-56f7d2f823d2 to disappear Mar 27 00:29:01.150: INFO: Pod downward-api-c44f1961-7061-43de-9806-56f7d2f823d2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:01.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9346" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3327,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:01.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6875 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6875 STEP: Creating statefulset with conflicting port in namespace statefulset-6875 STEP: Waiting until pod test-pod starts running in namespace statefulset-6875 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6875 Mar 27 00:29:05.301: INFO: Observed stateful pod in namespace: statefulset-6875, name: ss-0, uid: 41006b80-3bbf-4bb7-b584-fbed04737471, status phase: Pending. Waiting for statefulset controller to delete. Mar 27 00:29:05.520: INFO: Observed stateful pod in namespace: statefulset-6875, name: ss-0, uid: 41006b80-3bbf-4bb7-b584-fbed04737471, status phase: Failed. Waiting for statefulset controller to delete. Mar 27 00:29:05.584: INFO: Observed stateful pod in namespace: statefulset-6875, name: ss-0, uid: 41006b80-3bbf-4bb7-b584-fbed04737471, status phase: Failed. Waiting for statefulset controller to delete. 
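[Note] The host-IP downward API test that passed above (completed 185) exercises the status.hostIP fieldRef as a container environment variable. A minimal sketch of such a pod; the pod and container names here are hypothetical stand-ins, not the test's generated manifest:

# Expose the node's IP to the container via the downward API
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-host-ip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
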
Mar 27 00:29:05.639: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6875 STEP: Removing pod with conflicting port in namespace statefulset-6875 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6875 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 27 00:29:09.769: INFO: Deleting all statefulset in ns statefulset-6875 Mar 27 00:29:09.771: INFO: Scaling statefulset ss to 0 Mar 27 00:29:19.808: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:29:19.811: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:19.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6875" for this suite. • [SLOW TEST:18.668 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":186,"skipped":3334,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:19.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 27 00:29:19.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8" in namespace "projected-527" to be "Succeeded or Failed" Mar 27 00:29:19.910: INFO: Pod "downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010308ms Mar 27 00:29:21.915: INFO: Pod "downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012852562s Mar 27 00:29:23.919: INFO: Pod "downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017337451s STEP: Saw pod success Mar 27 00:29:23.920: INFO: Pod "downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8" satisfied condition "Succeeded or Failed" Mar 27 00:29:23.923: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8 container client-container: STEP: delete the pod Mar 27 00:29:23.955: INFO: Waiting for pod downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8 to disappear Mar 27 00:29:23.969: INFO: Pod downwardapi-volume-eae4a533-27d1-4ad9-a983-42e872dc34c8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:23.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-527" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3336,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:23.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 27 00:29:24.057: INFO: Waiting up to 5m0s for pod "pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a" in namespace "emptydir-1757" to be "Succeeded or Failed" Mar 27 00:29:24.060: INFO: Pod "pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.569015ms Mar 27 00:29:26.063: INFO: Pod "pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006090869s Mar 27 00:29:28.087: INFO: Pod "pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029568464s STEP: Saw pod success Mar 27 00:29:28.087: INFO: Pod "pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a" satisfied condition "Succeeded or Failed" Mar 27 00:29:28.109: INFO: Trying to get logs from node latest-worker2 pod pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a container test-container: STEP: delete the pod Mar 27 00:29:28.127: INFO: Waiting for pod pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a to disappear Mar 27 00:29:28.131: INFO: Pod pod-a928f99e-ba22-4aa1-95e8-cdd570c6df5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:28.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1757" for this suite. 
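[Note] The emptydir test above mounts a memory-backed (tmpfs) emptyDir and verifies a root-owned file created with mode 0666. A minimal sketch with hypothetical names; medium: Memory is what selects tmpfs rather than node-local disk:

# Write a 0666 file into a tmpfs-backed emptyDir and confirm the mount type
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && grep ' /test-volume ' /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
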
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3340,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:28.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-4e917155-5240-425d-86ef-3fefee430c19 STEP: Creating a pod to test consume configMaps Mar 27 00:29:28.205: INFO: Waiting up to 5m0s for pod "pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d" in namespace "configmap-8919" to be "Succeeded or Failed" Mar 27 00:29:28.241: INFO: Pod "pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.374456ms Mar 27 00:29:30.245: INFO: Pod "pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039591255s Mar 27 00:29:32.249: INFO: Pod "pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043656713s STEP: Saw pod success Mar 27 00:29:32.249: INFO: Pod "pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d" satisfied condition "Succeeded or Failed" Mar 27 00:29:32.252: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d container configmap-volume-test: STEP: delete the pod Mar 27 00:29:32.285: INFO: Waiting for pod pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d to disappear Mar 27 00:29:32.303: INFO: Pod pod-configmaps-d1b490af-1e45-4721-8aaf-85b80c21680d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:32.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8919" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3341,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:32.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:36.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1255" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:36.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 27 00:29:36.519: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 27 00:29:36.524: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 27 00:29:36.524: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 27 00:29:36.541: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 27 00:29:36.541: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 27 00:29:36.561: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 27 00:29:36.562: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 27 00:29:43.767: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:43.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8835" for this suite. 
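[Note] A LimitRange consistent with the defaults verified above (requests of cpu 100m, memory 200Mi, ephemeral-storage 200Gi; limits of cpu 500m, memory 500Mi, ephemeral-storage 500Gi; the raw quantities 209715200, 214748364800, 524288000, and 536870912000 bytes in the log are those Mi/Gi values) would look roughly like this. The name is hypothetical and the min/max bounds the test also exercises are omitted:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF

Any container created without resource requirements in that namespace then has these values injected, which can be confirmed with:

kubectl get pod POD_NAME -o jsonpath='{.spec.containers[0].resources}'
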
• [SLOW TEST:7.386 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":191,"skipped":3376,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:43.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:29:43.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9575' Mar 27 00:29:44.195: INFO: stderr: "" Mar 27 00:29:44.195: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 27 00:29:44.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9575' Mar 27 00:29:44.469: INFO: stderr: "" Mar 27 00:29:44.469: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 27 00:29:45.505: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:29:45.505: INFO: Found 0 / 1 Mar 27 00:29:46.490: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:29:46.490: INFO: Found 0 / 1 Mar 27 00:29:47.495: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:29:47.495: INFO: Found 1 / 1 Mar 27 00:29:47.495: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 27 00:29:47.513: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:29:47.513: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
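[Note] The describe calls that follow pass the harness's explicit --server and --kubeconfig flags on every invocation. Stripped down to a plain client whose current context already points at the cluster, the same inspection (names taken from the log) is:

kubectl describe pod agnhost-master-v2cjb -n kubectl-9575
kubectl describe rc agnhost-master -n kubectl-9575
kubectl describe service agnhost-master -n kubectl-9575
kubectl describe node latest-control-plane
kubectl describe namespace kubectl-9575
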
Mar 27 00:29:47.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-v2cjb --namespace=kubectl-9575' Mar 27 00:29:47.630: INFO: stderr: "" Mar 27 00:29:47.630: INFO: stdout: "Name: agnhost-master-v2cjb\nNamespace: kubectl-9575\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Fri, 27 Mar 2020 00:29:44 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.167\nIPs:\n IP: 10.244.2.167\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://15f81decdc578db3e5571bb9015f8b7c1e72b43ddd58b917255a4cdf9cebe64b\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 27 Mar 2020 00:29:46 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-r7rzn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-r7rzn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-r7rzn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-9575/agnhost-master-v2cjb to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" Mar 27 00:29:47.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9575' Mar 27 00:29:47.765: INFO: stderr: "" Mar 27 00:29:47.765: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9575\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-v2cjb\n" Mar 27 00:29:47.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9575' Mar 27 00:29:47.866: INFO: stderr: "" Mar 27 00:29:47.866: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9575\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.96.244\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.167:6379\nSession Affinity: None\nEvents: \n" Mar 27 00:29:47.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node 
latest-control-plane' Mar 27 00:29:47.982: INFO: stderr: "" Mar 27 00:29:47.982: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Fri, 27 Mar 2020 00:29:39 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 27 Mar 2020 00:25:10 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 27 Mar 2020 00:25:10 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 27 Mar 2020 00:25:10 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 27 Mar 2020 00:25:10 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 11d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 11d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 11d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 11d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Mar 27 00:29:47.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config 
describe namespace kubectl-9575' Mar 27 00:29:48.084: INFO: stderr: "" Mar 27 00:29:48.084: INFO: stdout: "Name: kubectl-9575\nLabels: e2e-framework=kubectl\n e2e-run=3daa3541-faa3-4693-9570-7009814c3d0d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:29:48.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9575" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":192,"skipped":3379,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:29:48.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:29:48.182: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
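[Note] The RollingUpdate exercise below swaps the DaemonSet's image from docker.io/library/httpd:2.4.38-alpine to us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 and waits for the controller to replace pods node by node. Outside the harness, the same update could be driven and watched roughly as follows; CONTAINER_NAME is a guess, since the log never prints the container's name, and DAEMONSET_NAMESPACE stands in for the generated test namespace:

kubectl -n DAEMONSET_NAMESPACE set image daemonset/daemon-set CONTAINER_NAME=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
kubectl -n DAEMONSET_NAMESPACE rollout status daemonset/daemon-set
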
Mar 27 00:29:48.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:48.219: INFO: Number of nodes with available pods: 0 Mar 27 00:29:48.219: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:29:49.272: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:49.280: INFO: Number of nodes with available pods: 0 Mar 27 00:29:49.280: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:29:50.224: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:50.227: INFO: Number of nodes with available pods: 0 Mar 27 00:29:50.227: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:29:51.254: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:51.260: INFO: Number of nodes with available pods: 0 Mar 27 00:29:51.260: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:29:52.224: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:52.228: INFO: Number of nodes with available pods: 0 Mar 27 00:29:52.228: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:29:53.228: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:53.231: INFO: Number of nodes with available pods: 2 Mar 27 00:29:53.231: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 27 00:29:53.267: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:53.267: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:53.300: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:54.303: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:54.303: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:54.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:55.304: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:55.304: INFO: Wrong image for pod: daemon-set-jqhqn. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:55.392: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:56.303: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:56.303: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:29:56.303: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:56.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:57.304: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:57.304: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:29:57.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:57.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:58.305: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:58.305: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:29:58.305: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:58.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:29:59.308: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:59.308: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:29:59.308: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:29:59.311: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:00.304: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:00.304: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:30:00.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:00.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:01.308: INFO: Wrong image for pod: daemon-set-4v4tn. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:01.308: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:30:01.308: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:01.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:02.304: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:02.304: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:30:02.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:02.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:03.305: INFO: Wrong image for pod: daemon-set-4v4tn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:03.305: INFO: Pod daemon-set-4v4tn is not available Mar 27 00:30:03.305: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:03.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:04.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:04.304: INFO: Pod daemon-set-v4r8v is not available Mar 27 00:30:04.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:05.449: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:05.449: INFO: Pod daemon-set-v4r8v is not available Mar 27 00:30:05.456: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:06.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:06.304: INFO: Pod daemon-set-v4r8v is not available Mar 27 00:30:06.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:07.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:07.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:08.304: INFO: Wrong image for pod: daemon-set-jqhqn. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:08.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:09.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:09.304: INFO: Pod daemon-set-jqhqn is not available Mar 27 00:30:09.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:10.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:10.304: INFO: Pod daemon-set-jqhqn is not available Mar 27 00:30:10.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:11.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:11.305: INFO: Pod daemon-set-jqhqn is not available Mar 27 00:30:11.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:12.304: INFO: Wrong image for pod: daemon-set-jqhqn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 27 00:30:12.305: INFO: Pod daemon-set-jqhqn is not available Mar 27 00:30:12.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:13.304: INFO: Pod daemon-set-hjcpv is not available Mar 27 00:30:13.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 27 00:30:13.313: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:13.316: INFO: Number of nodes with available pods: 1 Mar 27 00:30:13.316: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:30:14.360: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:14.364: INFO: Number of nodes with available pods: 1 Mar 27 00:30:14.364: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:30:15.321: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:15.324: INFO: Number of nodes with available pods: 1 Mar 27 00:30:15.324: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:30:16.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:30:16.324: INFO: Number of nodes with available pods: 2 Mar 27 00:30:16.324: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7534, will wait for the garbage collector to delete the pods Mar 27 00:30:16.399: INFO: Deleting DaemonSet.extensions daemon-set took: 6.67449ms Mar 27 00:30:16.499: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.255114ms Mar 27 00:30:23.103: INFO: Number of nodes with available pods: 0 Mar 27 00:30:23.103: INFO: Number of running nodes: 0, number of available pods: 0 Mar 27 00:30:23.106: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7534/daemonsets","resourceVersion":"3081913"},"items":null} Mar 27 00:30:23.109: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7534/pods","resourceVersion":"3081913"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:30:23.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7534" for this suite. 
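The rolling update exercised above can be reproduced outside the suite with a DaemonSet along the lines of the sketch below; the namespace and images mirror the log, but the manifest itself (names, labels) is illustrative, not the test's actual fixture.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-7534
spec:
  selector:
    matchLabels:
      app: daemon-set            # illustrative label; the e2e fixture uses its own
  updateStrategy:
    type: RollingUpdate          # old pods are deleted and replaced node by node
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine

Changing spec.template.spec.containers[0].image to us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 is what drives the "Wrong image for pod" polling seen above, and the untolerated node-role.kubernetes.io/master:NoSchedule taint is why latest-control-plane is skipped on every check.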
• [SLOW TEST:35.036 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":193,"skipped":3382,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:30:23.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:30:23.204: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 27 00:30:28.211: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 27 00:30:28.211: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 27 00:30:28.271: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9664 /apis/apps/v1/namespaces/deployment-9664/deployments/test-cleanup-deployment 6d084dbb-6e2b-4f26-93c4-a6bb7e8c862d 3081954 1 2020-03-27 00:30:28 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005de45d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 27 
00:30:28.344: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-9664 /apis/apps/v1/namespaces/deployment-9664/replicasets/test-cleanup-deployment-577c77b589 9dbb4e70-88e9-4a62-b9ad-7202cbb2b097 3081961 1 2020-03-27 00:30:28 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6d084dbb-6e2b-4f26-93c4-a6bb7e8c862d 0xc005494ce7 0xc005494ce8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005494d58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:30:28.344: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 27 00:30:28.344: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9664 /apis/apps/v1/namespaces/deployment-9664/replicasets/test-cleanup-controller 747f1c9d-3e88-4c96-b3d5-6eb77411ce6c 3081956 1 2020-03-27 00:30:23 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 6d084dbb-6e2b-4f26-93c4-a6bb7e8c862d 0xc005494c17 0xc005494c18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005494c78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 27 00:30:28.348: INFO: Pod "test-cleanup-controller-gtlnj" is available: &Pod{ObjectMeta:{test-cleanup-controller-gtlnj test-cleanup-controller- deployment-9664 /api/v1/namespaces/deployment-9664/pods/test-cleanup-controller-gtlnj 1aced304-db6b-4c0b-b989-e548710b375b 3081938 0 2020-03-27 00:30:23 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet 
test-cleanup-controller 747f1c9d-3e88-4c96-b3d5-6eb77411ce6c 0xc005b4a7c7 0xc005b4a7c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnmhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnmhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnmhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:30:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:30:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.43,StartTime:2020-03-27 00:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 00:30:25 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://11c4c4d56af98d8944e3bff0566c126d4bbd433741d538f201408938028b9127,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 27 00:30:28.348: INFO: Pod "test-cleanup-deployment-577c77b589-gsfmj" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-gsfmj test-cleanup-deployment-577c77b589- deployment-9664 /api/v1/namespaces/deployment-9664/pods/test-cleanup-deployment-577c77b589-gsfmj 2d227bd7-e9b1-45e1-b7dc-17263ee9451d 3081963 0 2020-03-27 00:30:28 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 9dbb4e70-88e9-4a62-b9ad-7202cbb2b097 0xc005b4a957 0xc005b4a958}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnmhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnmhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnmhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 00:30:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:30:28.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9664" for this suite. • [SLOW TEST:5.256 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":194,"skipped":3387,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:30:28.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-182 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 27 00:30:28.516: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 27 00:30:28.614: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:30:30.733: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:30:32.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:34.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:36.617: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:38.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:40.626: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:42.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:44.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:46.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:30:48.618: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 27 00:30:48.623: INFO: The status 
of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 27 00:30:52.646: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostname&protocol=http&host=10.244.2.170&port=8080&tries=1'] Namespace:pod-network-test-182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:30:52.646: INFO: >>> kubeConfig: /root/.kube/config I0327 00:30:52.676970 7 log.go:172] (0xc002c0a210) (0xc001df9180) Create stream I0327 00:30:52.677020 7 log.go:172] (0xc002c0a210) (0xc001df9180) Stream added, broadcasting: 1 I0327 00:30:52.678773 7 log.go:172] (0xc002c0a210) Reply frame received for 1 I0327 00:30:52.678798 7 log.go:172] (0xc002c0a210) (0xc001968140) Create stream I0327 00:30:52.678807 7 log.go:172] (0xc002c0a210) (0xc001968140) Stream added, broadcasting: 3 I0327 00:30:52.679546 7 log.go:172] (0xc002c0a210) Reply frame received for 3 I0327 00:30:52.679587 7 log.go:172] (0xc002c0a210) (0xc000f80140) Create stream I0327 00:30:52.679595 7 log.go:172] (0xc002c0a210) (0xc000f80140) Stream added, broadcasting: 5 I0327 00:30:52.680318 7 log.go:172] (0xc002c0a210) Reply frame received for 5 I0327 00:30:52.780230 7 log.go:172] (0xc002c0a210) Data frame received for 3 I0327 00:30:52.780280 7 log.go:172] (0xc001968140) (3) Data frame handling I0327 00:30:52.780317 7 log.go:172] (0xc001968140) (3) Data frame sent I0327 00:30:52.781094 7 log.go:172] (0xc002c0a210) Data frame received for 3 I0327 00:30:52.781207 7 log.go:172] (0xc001968140) (3) Data frame handling I0327 00:30:52.781238 7 log.go:172] (0xc002c0a210) Data frame received for 5 I0327 00:30:52.781252 7 log.go:172] (0xc000f80140) (5) Data frame handling I0327 00:30:52.783236 7 log.go:172] (0xc002c0a210) Data frame received for 1 I0327 00:30:52.783271 7 log.go:172] (0xc001df9180) (1) Data frame handling I0327 00:30:52.783289 7 log.go:172] (0xc001df9180) (1) Data frame sent I0327 00:30:52.783304 7 log.go:172] (0xc002c0a210) (0xc001df9180) Stream removed, broadcasting: 1 I0327 00:30:52.783328 7 log.go:172] (0xc002c0a210) Go away received I0327 00:30:52.783391 7 log.go:172] (0xc002c0a210) (0xc001df9180) Stream removed, broadcasting: 1 I0327 00:30:52.783407 7 log.go:172] (0xc002c0a210) (0xc001968140) Stream removed, broadcasting: 3 I0327 00:30:52.783414 7 log.go:172] (0xc002c0a210) (0xc000f80140) Stream removed, broadcasting: 5 Mar 27 00:30:52.783: INFO: Waiting for responses: map[] Mar 27 00:30:52.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostname&protocol=http&host=10.244.1.45&port=8080&tries=1'] Namespace:pod-network-test-182 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:30:52.787: INFO: >>> kubeConfig: /root/.kube/config I0327 00:30:52.822276 7 log.go:172] (0xc0026c8840) (0xc000f80a00) Create stream I0327 00:30:52.822304 7 log.go:172] (0xc0026c8840) (0xc000f80a00) Stream added, broadcasting: 1 I0327 00:30:52.824226 7 log.go:172] (0xc0026c8840) Reply frame received for 1 I0327 00:30:52.824274 7 log.go:172] (0xc0026c8840) (0xc001968320) Create stream I0327 00:30:52.824289 7 log.go:172] (0xc0026c8840) (0xc001968320) Stream added, broadcasting: 3 I0327 00:30:52.825614 7 log.go:172] (0xc0026c8840) Reply frame received for 3 I0327 00:30:52.825653 7 log.go:172] (0xc0026c8840) (0xc001df9220) Create stream I0327 00:30:52.825670 7 log.go:172] (0xc0026c8840) (0xc001df9220) Stream added, 
broadcasting: 5 I0327 00:30:52.826856 7 log.go:172] (0xc0026c8840) Reply frame received for 5 I0327 00:30:52.887129 7 log.go:172] (0xc0026c8840) Data frame received for 3 I0327 00:30:52.887151 7 log.go:172] (0xc001968320) (3) Data frame handling I0327 00:30:52.887164 7 log.go:172] (0xc001968320) (3) Data frame sent I0327 00:30:52.887867 7 log.go:172] (0xc0026c8840) Data frame received for 5 I0327 00:30:52.887905 7 log.go:172] (0xc001df9220) (5) Data frame handling I0327 00:30:52.888008 7 log.go:172] (0xc0026c8840) Data frame received for 3 I0327 00:30:52.888029 7 log.go:172] (0xc001968320) (3) Data frame handling I0327 00:30:52.889380 7 log.go:172] (0xc0026c8840) Data frame received for 1 I0327 00:30:52.889399 7 log.go:172] (0xc000f80a00) (1) Data frame handling I0327 00:30:52.889413 7 log.go:172] (0xc000f80a00) (1) Data frame sent I0327 00:30:52.889439 7 log.go:172] (0xc0026c8840) (0xc000f80a00) Stream removed, broadcasting: 1 I0327 00:30:52.889491 7 log.go:172] (0xc0026c8840) (0xc000f80a00) Stream removed, broadcasting: 1 I0327 00:30:52.889501 7 log.go:172] (0xc0026c8840) (0xc001968320) Stream removed, broadcasting: 3 I0327 00:30:52.889629 7 log.go:172] (0xc0026c8840) (0xc001df9220) Stream removed, broadcasting: 5 Mar 27 00:30:52.889: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 I0327 00:30:52.889724 7 log.go:172] (0xc0026c8840) Go away received Mar 27 00:30:52.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-182" for this suite. • [SLOW TEST:24.511 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3394,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:30:52.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:09.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1651" for this suite. • [SLOW TEST:16.302 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":196,"skipped":3397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:09.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 27 00:31:09.259: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:15.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-224" for this suite. 
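What "invoke init containers on a RestartAlways pod" amounts to is a pod shaped like the sketch below, where every init container must run to completion, in order, before the main container starts; the container names and images here are assumptions, not the exact e2e spec.

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo            # illustrative name
spec:
  restartPolicy: Always
  initContainers:                # executed sequentially; each must exit 0
  - name: init1
    image: busybox:1.29
    command: ['true']
  - name: init2
    image: busybox:1.29
    command: ['true']
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2  # stays Running, matching restartPolicy: Always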
• [SLOW TEST:6.526 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":197,"skipped":3453,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:15.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:21.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3082" for this suite. STEP: Destroying namespace "nsdeletetest-1917" for this suite. Mar 27 00:31:21.984: INFO: Namespace nsdeletetest-1917 was already deleted STEP: Destroying namespace "nsdeletetest-8107" for this suite. 
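The property checked above is plain cascading deletion: a Service lives inside its namespace, so removing the namespace removes it. A minimal reproduction, with illustrative names, looks like:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 80

Deleting the namespace (kubectl delete namespace nsdeletetest) takes the service with it, and a freshly recreated namespace of the same name starts out empty, which is exactly what the final "Verifying there is no service in the namespace" step asserts.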
• [SLOW TEST:6.260 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":198,"skipped":3466,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:21.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:31:22.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:31:24.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865882, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865882, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865882, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865882, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:31:27.819: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a 
namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:37.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9623" for this suite. STEP: Destroying namespace "webhook-9623-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.100 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":199,"skipped":3476,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:38.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 27 00:31:38.173: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3427 /api/v1/namespaces/watch-3427/configmaps/e2e-watch-test-watch-closed 86fd480c-5114-4db2-9824-46678af85537 3082467 0 2020-03-27 00:31:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:31:38.173: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3427 /api/v1/namespaces/watch-3427/configmaps/e2e-watch-test-watch-closed 86fd480c-5114-4db2-9824-46678af85537 3082468 0 2020-03-27 00:31:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 27 00:31:38.184: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3427 /api/v1/namespaces/watch-3427/configmaps/e2e-watch-test-watch-closed 
86fd480c-5114-4db2-9824-46678af85537 3082469 0 2020-03-27 00:31:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:31:38.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3427 /api/v1/namespaces/watch-3427/configmaps/e2e-watch-test-watch-closed 86fd480c-5114-4db2-9824-46678af85537 3082470 0 2020-03-27 00:31:38 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:38.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3427" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":200,"skipped":3478,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:38.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-d98dd5fe-3da9-4782-96ca-22662ce61aa1 STEP: Creating a pod to test consume secrets Mar 27 00:31:38.268: INFO: Waiting up to 5m0s for pod "pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c" in namespace "secrets-6382" to be "Succeeded or Failed" Mar 27 00:31:38.280: INFO: Pod "pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.906001ms Mar 27 00:31:40.303: INFO: Pod "pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034954154s Mar 27 00:31:42.307: INFO: Pod "pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03894753s STEP: Saw pod success Mar 27 00:31:42.307: INFO: Pod "pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c" satisfied condition "Succeeded or Failed" Mar 27 00:31:42.310: INFO: Trying to get logs from node latest-worker pod pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c container secret-volume-test: STEP: delete the pod Mar 27 00:31:42.339: INFO: Waiting for pod pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c to disappear Mar 27 00:31:42.362: INFO: Pod pod-secrets-54dff88f-7749-44d3-b64e-ff094572057c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:42.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6382" for this suite. 
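The pattern under test here is one Secret consumed through two separate volume mounts in the same pod. A self-contained sketch, with illustrative names, keys and mount paths:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test    # the same secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: secret-test
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/secret-volume-1/* /etc/secret-volume-2/*']
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true

The pod exits 0 once both mounts expose the secret's keys, which is the "Succeeded or Failed" condition the log waits on.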
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3481,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:42.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 27 00:31:42.418: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Mar 27 00:31:42.987: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 27 00:31:45.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865903, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865903, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865903, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865902, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:31:47.581: INFO: Waited 515.338673ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:48.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7337" for this suite. 
• [SLOW TEST:5.858 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":202,"skipped":3503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:48.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:31:48.488: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:49.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7115" for this suite. 
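The create/delete cycle above needs nothing more than a minimal definition such as the one below; the group, kind and plural are placeholders (the e2e fixtures use similar throwaway names):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com     # must be <plural>.<group>
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:                # apiextensions.k8s.io/v1 requires a structural schema
        type: object
        x-kubernetes-preserve-unknown-fields: true

Creating this object makes /apis/mygroup.example.com/v1/.../noxus discoverable almost immediately, and deleting it removes the endpoint again; the test asserts both transitions.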
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":203,"skipped":3585,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:49.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 27 00:31:54.163: INFO: Successfully updated pod "annotationupdate959800cf-e272-4d32-90b6-597dc34f04ed" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:31:56.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7008" for this suite. • [SLOW TEST:6.657 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3588,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:31:56.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 27 00:32:02.737: INFO: 0 pods remaining Mar 27 00:32:02.737: INFO: 0 pods has nil DeletionTimestamp Mar 27 00:32:02.737: INFO: STEP: Gathering metrics W0327 00:32:03.588414 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 27 00:32:03.588: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:32:03.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6848" for this suite. • [SLOW TEST:7.794 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":205,"skipped":3599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:32:03.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4383.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4383.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4383.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4383.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 27 00:32:10.753: INFO: DNS probes using dns-4383/dns-test-f6ba8755-d23e-49aa-913f-74c33959a0f9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:32:10.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4383" for this suite. • [SLOW TEST:6.866 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":206,"skipped":3667,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:32:10.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:32:10.957: INFO: Creating ReplicaSet my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae Mar 27 00:32:11.148: INFO: Pod name my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae: Found 0 pods out of 1 Mar 27 00:32:16.151: INFO: Pod name my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae: Found 1 pods out of 1 Mar 27 00:32:16.151: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae" is running Mar 27 00:32:16.159: INFO: Pod "my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae-wcgt6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-03-27 00:32:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-27 00:32:15 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-27 00:32:15 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-27 00:32:11 +0000 UTC Reason: Message:}]) Mar 27 00:32:16.159: INFO: Trying to dial the pod Mar 27 00:32:21.171: INFO: Controller my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae: Got expected result from replica 1 [my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae-wcgt6]: "my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae-wcgt6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:32:21.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6162" for this suite. • [SLOW TEST:10.325 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":207,"skipped":3685,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:32:21.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-f67cfd95-6474-47ce-827b-743fc8e3e3cd STEP: Creating a pod to test consume secrets Mar 27 00:32:21.311: INFO: Waiting up to 5m0s for pod "pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7" in namespace "secrets-2938" to be "Succeeded or Failed" Mar 27 00:32:21.329: INFO: Pod "pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.415381ms Mar 27 00:32:23.332: INFO: Pod "pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020784699s Mar 27 00:32:25.336: INFO: Pod "pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024901337s STEP: Saw pod success Mar 27 00:32:25.336: INFO: Pod "pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7" satisfied condition "Succeeded or Failed" Mar 27 00:32:25.340: INFO: Trying to get logs from node latest-worker pod pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7 container secret-volume-test: STEP: delete the pod Mar 27 00:32:25.382: INFO: Waiting for pod pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7 to disappear Mar 27 00:32:25.398: INFO: Pod pod-secrets-75d9937b-1b47-43c0-8a9f-4afe4daf16c7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:32:25.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2938" for this suite. STEP: Destroying namespace "secret-namespace-1420" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3707,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:32:25.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 27 00:32:25.465: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 27 00:32:25.476: INFO: Waiting for terminating namespaces to be deleted... 
Mar 27 00:32:25.478: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 27 00:32:25.483: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 27 00:32:25.483: INFO: Container kindnet-cni ready: true, restart count 0 Mar 27 00:32:25.483: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 27 00:32:25.483: INFO: Container kube-proxy ready: true, restart count 0 Mar 27 00:32:25.483: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 27 00:32:25.497: INFO: my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae-wcgt6 from replicaset-6162 started at 2020-03-27 00:32:11 +0000 UTC (1 container status recorded) Mar 27 00:32:25.497: INFO: Container my-hostname-basic-f8037e1e-afbc-428a-a9cb-116cbb2243ae ready: true, restart count 0 Mar 27 00:32:25.497: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 27 00:32:25.497: INFO: Container kindnet-cni ready: true, restart count 0 Mar 27 00:32:25.497: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 27 00:32:25.497: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-52a26db4-667e-482c-a254-cc89d087e30c 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-52a26db4-667e-482c-a254-cc89d087e30c off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-52a26db4-667e-482c-a254-cc89d087e30c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:32:41.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6753" for this suite.
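What the scheduler checked above: a host port collides only on the exact (hostIP, hostPort, protocol) triple, so pod2 (different hostIP) and pod3 (different protocol) still fit beside pod1. A minimal sketch of pod1's declaration, assuming busybox as a stand-in image and an illustrative node label:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "90"   # stands in for the random label applied above
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
EOF
# pod2 would differ only in hostIP: 127.0.0.2, and pod3 only in protocol: UDP;
# both still schedule onto the same node.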
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.266 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":209,"skipped":3709,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:32:41.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 27 00:32:42.369: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 27 00:32:44.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865962, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865962, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865962, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865962, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:32:47.437: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:32:47.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource 
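The "List CRs in v1 / List CRs in v2" steps that follow work because the CRD's spec.conversion is set to strategy: Webhook, pointing at the e2e-test-crd-conversion-webhook service deployed above; the apiserver invokes the webhook to convert stored objects on the fly. A sketch with hypothetical plural/group names:

# Fully-qualified resource.version.group names let kubectl request a specific version:
kubectl get testcrds.v1.stable.example.com    # the stored objects rendered as v1
kubectl get testcrds.v2.stable.example.com    # the same objects rendered as v2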
STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:32:48.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5807" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.161 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":210,"skipped":3710,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:32:48.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:33:05.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8097" for this suite. • [SLOW TEST:16.172 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":211,"skipped":3711,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:33:05.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 27 00:33:05.084: INFO: Waiting up to 5m0s for pod "pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6" in namespace "emptydir-797" to be "Succeeded or Failed" Mar 27 00:33:05.101: INFO: Pod "pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.734842ms Mar 27 00:33:07.106: INFO: Pod "pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02124087s Mar 27 00:33:09.110: INFO: Pod "pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025262988s STEP: Saw pod success Mar 27 00:33:09.110: INFO: Pod "pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6" satisfied condition "Succeeded or Failed" Mar 27 00:33:09.113: INFO: Trying to get logs from node latest-worker pod pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6 container test-container: STEP: delete the pod Mar 27 00:33:09.145: INFO: Waiting for pod pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6 to disappear Mar 27 00:33:09.159: INFO: Pod pod-9cfe3ce4-1f61-429a-9c0c-acd07ac0f2c6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:33:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-797" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3722,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:33:09.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 27 00:33:13.312: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:33:13.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4960" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3732,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:33:13.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:33:13.951: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:33:15.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865993, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865993, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865994, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865993, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 27 00:33:17.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865993, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865993, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865994, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720865993, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:33:21.041: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:33:33.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8749" for this suite. STEP: Destroying namespace "webhook-8749-markers" for this suite. 
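The behaviour just verified hangs on two per-webhook fields: timeoutSeconds (range 1-30, defaulting to 10 in admissionregistration.k8s.io/v1) and failurePolicy (Fail rejects the request when the webhook times out; Ignore admits it anyway). A sketch of the slow-webhook registration; the handler path is hypothetical, and a real registration also needs clientConfig.caBundle:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook.example.com
webhooks:
- name: slow-webhook.example.com
  timeoutSeconds: 1              # deliberately shorter than the webhook's 5s delay
  failurePolicy: Ignore          # Fail would surface a timeout error instead
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-8749
      name: e2e-test-webhook
      path: /always-allow-delay-5s   # hypothetical path on the sample webhook
EOF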
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.973 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":214,"skipped":3734,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:33:33.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6807 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6807 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6807 Mar 27 00:33:33.455: INFO: Found 0 stateful pods, waiting for 1 Mar 27 00:33:43.461: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 27 00:33:43.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:33:46.142: INFO: stderr: "I0327 00:33:46.028509 2654 log.go:172] (0xc0003c6630) (0xc0005c1540) Create stream\nI0327 00:33:46.028555 2654 log.go:172] (0xc0003c6630) (0xc0005c1540) Stream added, broadcasting: 1\nI0327 00:33:46.031234 2654 log.go:172] (0xc0003c6630) Reply frame received for 1\nI0327 00:33:46.031269 2654 log.go:172] (0xc0003c6630) (0xc000620000) Create stream\nI0327 00:33:46.031277 2654 log.go:172] (0xc0003c6630) (0xc000620000) Stream added, broadcasting: 3\nI0327 00:33:46.032283 2654 log.go:172] (0xc0003c6630) Reply frame received for 3\nI0327 00:33:46.032324 2654 log.go:172] (0xc0003c6630) (0xc0006200a0) Create stream\nI0327 00:33:46.032336 2654 log.go:172] (0xc0003c6630) (0xc0006200a0) Stream added, broadcasting: 5\nI0327 00:33:46.033421 2654 log.go:172] (0xc0003c6630) Reply frame received for 5\nI0327 00:33:46.106612 2654 log.go:172] 
(0xc0003c6630) Data frame received for 5\nI0327 00:33:46.106665 2654 log.go:172] (0xc0006200a0) (5) Data frame handling\nI0327 00:33:46.106710 2654 log.go:172] (0xc0006200a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:33:46.134886 2654 log.go:172] (0xc0003c6630) Data frame received for 5\nI0327 00:33:46.134980 2654 log.go:172] (0xc0006200a0) (5) Data frame handling\nI0327 00:33:46.135015 2654 log.go:172] (0xc0003c6630) Data frame received for 3\nI0327 00:33:46.135033 2654 log.go:172] (0xc000620000) (3) Data frame handling\nI0327 00:33:46.135045 2654 log.go:172] (0xc000620000) (3) Data frame sent\nI0327 00:33:46.135199 2654 log.go:172] (0xc0003c6630) Data frame received for 3\nI0327 00:33:46.135251 2654 log.go:172] (0xc000620000) (3) Data frame handling\nI0327 00:33:46.136771 2654 log.go:172] (0xc0003c6630) Data frame received for 1\nI0327 00:33:46.136799 2654 log.go:172] (0xc0005c1540) (1) Data frame handling\nI0327 00:33:46.136815 2654 log.go:172] (0xc0005c1540) (1) Data frame sent\nI0327 00:33:46.136843 2654 log.go:172] (0xc0003c6630) (0xc0005c1540) Stream removed, broadcasting: 1\nI0327 00:33:46.136871 2654 log.go:172] (0xc0003c6630) Go away received\nI0327 00:33:46.137433 2654 log.go:172] (0xc0003c6630) (0xc0005c1540) Stream removed, broadcasting: 1\nI0327 00:33:46.137455 2654 log.go:172] (0xc0003c6630) (0xc000620000) Stream removed, broadcasting: 3\nI0327 00:33:46.137464 2654 log.go:172] (0xc0003c6630) (0xc0006200a0) Stream removed, broadcasting: 5\n" Mar 27 00:33:46.142: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:33:46.142: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:33:46.145: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 27 00:33:56.150: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:33:56.150: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:33:56.174: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999726s Mar 27 00:33:57.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984474834s Mar 27 00:33:58.183: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980114407s Mar 27 00:33:59.188: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975263071s Mar 27 00:34:00.193: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.970684398s Mar 27 00:34:01.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.965971857s Mar 27 00:34:02.202: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.96198293s Mar 27 00:34:03.206: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.957181904s Mar 27 00:34:04.210: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.952731592s Mar 27 00:34:05.216: INFO: Verifying statefulset ss doesn't scale past 1 for another 948.277508ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6807 Mar 27 00:34:06.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 27 00:34:06.440: INFO: stderr: "I0327 00:34:06.350648 2683 log.go:172] (0xc00003a0b0) (0xc000a8a460) Create stream\nI0327 
00:34:06.350732 2683 log.go:172] (0xc00003a0b0) (0xc000a8a460) Stream added, broadcasting: 1\nI0327 00:34:06.354359 2683 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0327 00:34:06.354498 2683 log.go:172] (0xc00003a0b0) (0xc0008f8000) Create stream\nI0327 00:34:06.354533 2683 log.go:172] (0xc00003a0b0) (0xc0008f8000) Stream added, broadcasting: 3\nI0327 00:34:06.356350 2683 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0327 00:34:06.356392 2683 log.go:172] (0xc00003a0b0) (0xc000a8a000) Create stream\nI0327 00:34:06.356411 2683 log.go:172] (0xc00003a0b0) (0xc000a8a000) Stream added, broadcasting: 5\nI0327 00:34:06.357413 2683 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0327 00:34:06.433986 2683 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0327 00:34:06.434014 2683 log.go:172] (0xc000a8a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0327 00:34:06.434035 2683 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0327 00:34:06.434062 2683 log.go:172] (0xc0008f8000) (3) Data frame handling\nI0327 00:34:06.434075 2683 log.go:172] (0xc0008f8000) (3) Data frame sent\nI0327 00:34:06.434087 2683 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0327 00:34:06.434098 2683 log.go:172] (0xc0008f8000) (3) Data frame handling\nI0327 00:34:06.434146 2683 log.go:172] (0xc000a8a000) (5) Data frame sent\nI0327 00:34:06.434305 2683 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0327 00:34:06.434332 2683 log.go:172] (0xc000a8a000) (5) Data frame handling\nI0327 00:34:06.435723 2683 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0327 00:34:06.435743 2683 log.go:172] (0xc000a8a460) (1) Data frame handling\nI0327 00:34:06.435754 2683 log.go:172] (0xc000a8a460) (1) Data frame sent\nI0327 00:34:06.435781 2683 log.go:172] (0xc00003a0b0) (0xc000a8a460) Stream removed, broadcasting: 1\nI0327 00:34:06.435948 2683 log.go:172] (0xc00003a0b0) Go away received\nI0327 00:34:06.436091 2683 log.go:172] (0xc00003a0b0) (0xc000a8a460) Stream removed, broadcasting: 1\nI0327 00:34:06.436106 2683 log.go:172] (0xc00003a0b0) (0xc0008f8000) Stream removed, broadcasting: 3\nI0327 00:34:06.436120 2683 log.go:172] (0xc00003a0b0) (0xc000a8a000) Stream removed, broadcasting: 5\n" Mar 27 00:34:06.441: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 27 00:34:06.441: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 27 00:34:06.444: INFO: Found 1 stateful pods, waiting for 3 Mar 27 00:34:16.449: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:34:16.449: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 27 00:34:16.449: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 27 00:34:16.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:34:16.695: INFO: stderr: "I0327 00:34:16.597894 2703 log.go:172] (0xc00003a160) (0xc0006e92c0) Create stream\nI0327 00:34:16.597943 2703 log.go:172] (0xc00003a160) (0xc0006e92c0) Stream added, broadcasting: 1\nI0327 00:34:16.599795 2703 log.go:172] (0xc00003a160) 
Reply frame received for 1\nI0327 00:34:16.599822 2703 log.go:172] (0xc00003a160) (0xc00098a000) Create stream\nI0327 00:34:16.599830 2703 log.go:172] (0xc00003a160) (0xc00098a000) Stream added, broadcasting: 3\nI0327 00:34:16.600559 2703 log.go:172] (0xc00003a160) Reply frame received for 3\nI0327 00:34:16.600594 2703 log.go:172] (0xc00003a160) (0xc00098a0a0) Create stream\nI0327 00:34:16.600602 2703 log.go:172] (0xc00003a160) (0xc00098a0a0) Stream added, broadcasting: 5\nI0327 00:34:16.601367 2703 log.go:172] (0xc00003a160) Reply frame received for 5\nI0327 00:34:16.688924 2703 log.go:172] (0xc00003a160) Data frame received for 5\nI0327 00:34:16.688970 2703 log.go:172] (0xc00098a0a0) (5) Data frame handling\nI0327 00:34:16.688993 2703 log.go:172] (0xc00098a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:34:16.689016 2703 log.go:172] (0xc00003a160) Data frame received for 3\nI0327 00:34:16.689026 2703 log.go:172] (0xc00098a000) (3) Data frame handling\nI0327 00:34:16.689038 2703 log.go:172] (0xc00098a000) (3) Data frame sent\nI0327 00:34:16.689060 2703 log.go:172] (0xc00003a160) Data frame received for 3\nI0327 00:34:16.689084 2703 log.go:172] (0xc00098a000) (3) Data frame handling\nI0327 00:34:16.689288 2703 log.go:172] (0xc00003a160) Data frame received for 5\nI0327 00:34:16.689313 2703 log.go:172] (0xc00098a0a0) (5) Data frame handling\nI0327 00:34:16.690946 2703 log.go:172] (0xc00003a160) Data frame received for 1\nI0327 00:34:16.690971 2703 log.go:172] (0xc0006e92c0) (1) Data frame handling\nI0327 00:34:16.690984 2703 log.go:172] (0xc0006e92c0) (1) Data frame sent\nI0327 00:34:16.691001 2703 log.go:172] (0xc00003a160) (0xc0006e92c0) Stream removed, broadcasting: 1\nI0327 00:34:16.691022 2703 log.go:172] (0xc00003a160) Go away received\nI0327 00:34:16.691441 2703 log.go:172] (0xc00003a160) (0xc0006e92c0) Stream removed, broadcasting: 1\nI0327 00:34:16.691466 2703 log.go:172] (0xc00003a160) (0xc00098a000) Stream removed, broadcasting: 3\nI0327 00:34:16.691478 2703 log.go:172] (0xc00003a160) (0xc00098a0a0) Stream removed, broadcasting: 5\n" Mar 27 00:34:16.696: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:34:16.696: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:34:16.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:34:16.944: INFO: stderr: "I0327 00:34:16.834505 2726 log.go:172] (0xc00092c6e0) (0xc000699540) Create stream\nI0327 00:34:16.834569 2726 log.go:172] (0xc00092c6e0) (0xc000699540) Stream added, broadcasting: 1\nI0327 00:34:16.837401 2726 log.go:172] (0xc00092c6e0) Reply frame received for 1\nI0327 00:34:16.837461 2726 log.go:172] (0xc00092c6e0) (0xc0005492c0) Create stream\nI0327 00:34:16.837490 2726 log.go:172] (0xc00092c6e0) (0xc0005492c0) Stream added, broadcasting: 3\nI0327 00:34:16.838599 2726 log.go:172] (0xc00092c6e0) Reply frame received for 3\nI0327 00:34:16.838669 2726 log.go:172] (0xc00092c6e0) (0xc000549360) Create stream\nI0327 00:34:16.838694 2726 log.go:172] (0xc00092c6e0) (0xc000549360) Stream added, broadcasting: 5\nI0327 00:34:16.839671 2726 log.go:172] (0xc00092c6e0) Reply frame received for 5\nI0327 00:34:16.905413 2726 log.go:172] (0xc00092c6e0) Data frame received for 5\nI0327 
00:34:16.905451 2726 log.go:172] (0xc000549360) (5) Data frame handling\nI0327 00:34:16.905478 2726 log.go:172] (0xc000549360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:34:16.936241 2726 log.go:172] (0xc00092c6e0) Data frame received for 5\nI0327 00:34:16.936257 2726 log.go:172] (0xc000549360) (5) Data frame handling\nI0327 00:34:16.936276 2726 log.go:172] (0xc00092c6e0) Data frame received for 3\nI0327 00:34:16.936285 2726 log.go:172] (0xc0005492c0) (3) Data frame handling\nI0327 00:34:16.936294 2726 log.go:172] (0xc0005492c0) (3) Data frame sent\nI0327 00:34:16.936718 2726 log.go:172] (0xc00092c6e0) Data frame received for 3\nI0327 00:34:16.936735 2726 log.go:172] (0xc0005492c0) (3) Data frame handling\nI0327 00:34:16.938708 2726 log.go:172] (0xc00092c6e0) Data frame received for 1\nI0327 00:34:16.938727 2726 log.go:172] (0xc000699540) (1) Data frame handling\nI0327 00:34:16.938747 2726 log.go:172] (0xc000699540) (1) Data frame sent\nI0327 00:34:16.938760 2726 log.go:172] (0xc00092c6e0) (0xc000699540) Stream removed, broadcasting: 1\nI0327 00:34:16.939013 2726 log.go:172] (0xc00092c6e0) Go away received\nI0327 00:34:16.939106 2726 log.go:172] (0xc00092c6e0) (0xc000699540) Stream removed, broadcasting: 1\nI0327 00:34:16.939159 2726 log.go:172] (0xc00092c6e0) (0xc0005492c0) Stream removed, broadcasting: 3\nI0327 00:34:16.939177 2726 log.go:172] (0xc00092c6e0) (0xc000549360) Stream removed, broadcasting: 5\n" Mar 27 00:34:16.944: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:34:16.944: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:34:16.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 27 00:34:17.170: INFO: stderr: "I0327 00:34:17.075993 2747 log.go:172] (0xc0003c9340) (0xc0005fb4a0) Create stream\nI0327 00:34:17.076050 2747 log.go:172] (0xc0003c9340) (0xc0005fb4a0) Stream added, broadcasting: 1\nI0327 00:34:17.078646 2747 log.go:172] (0xc0003c9340) Reply frame received for 1\nI0327 00:34:17.078695 2747 log.go:172] (0xc0003c9340) (0xc0009bc000) Create stream\nI0327 00:34:17.078709 2747 log.go:172] (0xc0003c9340) (0xc0009bc000) Stream added, broadcasting: 3\nI0327 00:34:17.079963 2747 log.go:172] (0xc0003c9340) Reply frame received for 3\nI0327 00:34:17.080026 2747 log.go:172] (0xc0003c9340) (0xc000440000) Create stream\nI0327 00:34:17.080047 2747 log.go:172] (0xc0003c9340) (0xc000440000) Stream added, broadcasting: 5\nI0327 00:34:17.081408 2747 log.go:172] (0xc0003c9340) Reply frame received for 5\nI0327 00:34:17.136821 2747 log.go:172] (0xc0003c9340) Data frame received for 5\nI0327 00:34:17.136853 2747 log.go:172] (0xc000440000) (5) Data frame handling\nI0327 00:34:17.136877 2747 log.go:172] (0xc000440000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0327 00:34:17.164141 2747 log.go:172] (0xc0003c9340) Data frame received for 5\nI0327 00:34:17.164176 2747 log.go:172] (0xc0003c9340) Data frame received for 3\nI0327 00:34:17.164207 2747 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0327 00:34:17.164221 2747 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0327 00:34:17.164233 2747 log.go:172] (0xc0003c9340) Data frame received for 3\nI0327 00:34:17.164243 2747 log.go:172] (0xc0009bc000) (3) 
Data frame handling\nI0327 00:34:17.164273 2747 log.go:172] (0xc000440000) (5) Data frame handling\nI0327 00:34:17.166821 2747 log.go:172] (0xc0003c9340) Data frame received for 1\nI0327 00:34:17.166891 2747 log.go:172] (0xc0005fb4a0) (1) Data frame handling\nI0327 00:34:17.166936 2747 log.go:172] (0xc0005fb4a0) (1) Data frame sent\nI0327 00:34:17.167030 2747 log.go:172] (0xc0003c9340) (0xc0005fb4a0) Stream removed, broadcasting: 1\nI0327 00:34:17.167335 2747 log.go:172] (0xc0003c9340) (0xc0005fb4a0) Stream removed, broadcasting: 1\nI0327 00:34:17.167355 2747 log.go:172] (0xc0003c9340) (0xc0009bc000) Stream removed, broadcasting: 3\nI0327 00:34:17.167496 2747 log.go:172] (0xc0003c9340) (0xc000440000) Stream removed, broadcasting: 5\n" Mar 27 00:34:17.170: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 27 00:34:17.170: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 27 00:34:17.170: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:34:17.174: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 27 00:34:27.182: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:34:27.182: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:34:27.182: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 27 00:34:27.204: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999945s Mar 27 00:34:28.209: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984853315s Mar 27 00:34:29.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979720645s Mar 27 00:34:30.219: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974154965s Mar 27 00:34:31.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.968999157s Mar 27 00:34:32.235: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.963384478s Mar 27 00:34:33.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953001219s Mar 27 00:34:34.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.93696315s Mar 27 00:34:35.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.932341281s Mar 27 00:34:36.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 927.983343ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6807 Mar 27 00:34:37.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 27 00:34:37.448: INFO: stderr: "I0327 00:34:37.389777 2768 log.go:172] (0xc00003a370) (0xc00042a000) Create stream\nI0327 00:34:37.389873 2768 log.go:172] (0xc00003a370) (0xc00042a000) Stream added, broadcasting: 1\nI0327 00:34:37.391396 2768 log.go:172] (0xc00003a370) Reply frame received for 1\nI0327 00:34:37.391440 2768 log.go:172] (0xc00003a370) (0xc000851360) Create stream\nI0327 00:34:37.391455 2768 log.go:172] (0xc00003a370) (0xc000851360) Stream added, broadcasting: 3\nI0327 00:34:37.392247 2768 log.go:172] (0xc00003a370) Reply frame received for 3\nI0327 00:34:37.392288 2768 log.go:172] (0xc00003a370) (0xc000851540) Create stream\nI0327 00:34:37.392301 2768 log.go:172]
(0xc00003a370) (0xc000851540) Stream added, broadcasting: 5\nI0327 00:34:37.393343 2768 log.go:172] (0xc00003a370) Reply frame received for 5\nI0327 00:34:37.441984 2768 log.go:172] (0xc00003a370) Data frame received for 3\nI0327 00:34:37.442039 2768 log.go:172] (0xc000851360) (3) Data frame handling\nI0327 00:34:37.442090 2768 log.go:172] (0xc000851360) (3) Data frame sent\nI0327 00:34:37.442129 2768 log.go:172] (0xc00003a370) Data frame received for 3\nI0327 00:34:37.442154 2768 log.go:172] (0xc000851360) (3) Data frame handling\nI0327 00:34:37.442170 2768 log.go:172] (0xc00003a370) Data frame received for 5\nI0327 00:34:37.442179 2768 log.go:172] (0xc000851540) (5) Data frame handling\nI0327 00:34:37.442190 2768 log.go:172] (0xc000851540) (5) Data frame sent\nI0327 00:34:37.442204 2768 log.go:172] (0xc00003a370) Data frame received for 5\nI0327 00:34:37.442223 2768 log.go:172] (0xc000851540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0327 00:34:37.443496 2768 log.go:172] (0xc00003a370) Data frame received for 1\nI0327 00:34:37.443518 2768 log.go:172] (0xc00042a000) (1) Data frame handling\nI0327 00:34:37.443535 2768 log.go:172] (0xc00042a000) (1) Data frame sent\nI0327 00:34:37.443552 2768 log.go:172] (0xc00003a370) (0xc00042a000) Stream removed, broadcasting: 1\nI0327 00:34:37.443575 2768 log.go:172] (0xc00003a370) Go away received\nI0327 00:34:37.443964 2768 log.go:172] (0xc00003a370) (0xc00042a000) Stream removed, broadcasting: 1\nI0327 00:34:37.443987 2768 log.go:172] (0xc00003a370) (0xc000851360) Stream removed, broadcasting: 3\nI0327 00:34:37.444002 2768 log.go:172] (0xc00003a370) (0xc000851540) Stream removed, broadcasting: 5\n" Mar 27 00:34:37.448: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 27 00:34:37.448: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 27 00:34:37.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 27 00:34:37.640: INFO: stderr: "I0327 00:34:37.569828 2789 log.go:172] (0xc000753d90) (0xc0005cd5e0) Create stream\nI0327 00:34:37.569893 2789 log.go:172] (0xc000753d90) (0xc0005cd5e0) Stream added, broadcasting: 1\nI0327 00:34:37.572378 2789 log.go:172] (0xc000753d90) Reply frame received for 1\nI0327 00:34:37.572414 2789 log.go:172] (0xc000753d90) (0xc000b9a000) Create stream\nI0327 00:34:37.572427 2789 log.go:172] (0xc000753d90) (0xc000b9a000) Stream added, broadcasting: 3\nI0327 00:34:37.573768 2789 log.go:172] (0xc000753d90) Reply frame received for 3\nI0327 00:34:37.573829 2789 log.go:172] (0xc000753d90) (0xc0005cd680) Create stream\nI0327 00:34:37.573851 2789 log.go:172] (0xc000753d90) (0xc0005cd680) Stream added, broadcasting: 5\nI0327 00:34:37.574894 2789 log.go:172] (0xc000753d90) Reply frame received for 5\nI0327 00:34:37.634442 2789 log.go:172] (0xc000753d90) Data frame received for 3\nI0327 00:34:37.634497 2789 log.go:172] (0xc000b9a000) (3) Data frame handling\nI0327 00:34:37.634522 2789 log.go:172] (0xc000b9a000) (3) Data frame sent\nI0327 00:34:37.634540 2789 log.go:172] (0xc000753d90) Data frame received for 3\nI0327 00:34:37.634551 2789 log.go:172] (0xc000b9a000) (3) Data frame handling\nI0327 00:34:37.634594 2789 log.go:172] (0xc000753d90) Data frame received for 5\nI0327 00:34:37.634622 2789 log.go:172] 
(0xc0005cd680) (5) Data frame handling\nI0327 00:34:37.634648 2789 log.go:172] (0xc0005cd680) (5) Data frame sent\nI0327 00:34:37.634667 2789 log.go:172] (0xc000753d90) Data frame received for 5\nI0327 00:34:37.634680 2789 log.go:172] (0xc0005cd680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0327 00:34:37.636333 2789 log.go:172] (0xc000753d90) Data frame received for 1\nI0327 00:34:37.636352 2789 log.go:172] (0xc0005cd5e0) (1) Data frame handling\nI0327 00:34:37.636361 2789 log.go:172] (0xc0005cd5e0) (1) Data frame sent\nI0327 00:34:37.636371 2789 log.go:172] (0xc000753d90) (0xc0005cd5e0) Stream removed, broadcasting: 1\nI0327 00:34:37.636532 2789 log.go:172] (0xc000753d90) Go away received\nI0327 00:34:37.636677 2789 log.go:172] (0xc000753d90) (0xc0005cd5e0) Stream removed, broadcasting: 1\nI0327 00:34:37.636694 2789 log.go:172] (0xc000753d90) (0xc000b9a000) Stream removed, broadcasting: 3\nI0327 00:34:37.636702 2789 log.go:172] (0xc000753d90) (0xc0005cd680) Stream removed, broadcasting: 5\n" Mar 27 00:34:37.640: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 27 00:34:37.640: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 27 00:34:37.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6807 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 27 00:34:37.848: INFO: stderr: "I0327 00:34:37.762419 2811 log.go:172] (0xc00003a420) (0xc0009d4000) Create stream\nI0327 00:34:37.762477 2811 log.go:172] (0xc00003a420) (0xc0009d4000) Stream added, broadcasting: 1\nI0327 00:34:37.764944 2811 log.go:172] (0xc00003a420) Reply frame received for 1\nI0327 00:34:37.765012 2811 log.go:172] (0xc00003a420) (0xc000a22000) Create stream\nI0327 00:34:37.765272 2811 log.go:172] (0xc00003a420) (0xc000a22000) Stream added, broadcasting: 3\nI0327 00:34:37.766239 2811 log.go:172] (0xc00003a420) Reply frame received for 3\nI0327 00:34:37.766304 2811 log.go:172] (0xc00003a420) (0xc0009d40a0) Create stream\nI0327 00:34:37.766320 2811 log.go:172] (0xc00003a420) (0xc0009d40a0) Stream added, broadcasting: 5\nI0327 00:34:37.767138 2811 log.go:172] (0xc00003a420) Reply frame received for 5\nI0327 00:34:37.841395 2811 log.go:172] (0xc00003a420) Data frame received for 5\nI0327 00:34:37.841422 2811 log.go:172] (0xc0009d40a0) (5) Data frame handling\nI0327 00:34:37.841436 2811 log.go:172] (0xc0009d40a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0327 00:34:37.843637 2811 log.go:172] (0xc00003a420) Data frame received for 3\nI0327 00:34:37.843658 2811 log.go:172] (0xc000a22000) (3) Data frame handling\nI0327 00:34:37.843678 2811 log.go:172] (0xc000a22000) (3) Data frame sent\nI0327 00:34:37.843688 2811 log.go:172] (0xc00003a420) Data frame received for 3\nI0327 00:34:37.843697 2811 log.go:172] (0xc000a22000) (3) Data frame handling\nI0327 00:34:37.843821 2811 log.go:172] (0xc00003a420) Data frame received for 5\nI0327 00:34:37.843840 2811 log.go:172] (0xc0009d40a0) (5) Data frame handling\nI0327 00:34:37.845081 2811 log.go:172] (0xc00003a420) Data frame received for 1\nI0327 00:34:37.845092 2811 log.go:172] (0xc0009d4000) (1) Data frame handling\nI0327 00:34:37.845099 2811 log.go:172] (0xc0009d4000) (1) Data frame sent\nI0327 00:34:37.845106 2811 log.go:172] (0xc00003a420) (0xc0009d4000) Stream removed, 
broadcasting: 1\nI0327 00:34:37.845386 2811 log.go:172] (0xc00003a420) Go away received\nI0327 00:34:37.845511 2811 log.go:172] (0xc00003a420) (0xc0009d4000) Stream removed, broadcasting: 1\nI0327 00:34:37.845531 2811 log.go:172] (0xc00003a420) (0xc000a22000) Stream removed, broadcasting: 3\nI0327 00:34:37.845544 2811 log.go:172] (0xc00003a420) (0xc0009d40a0) Stream removed, broadcasting: 5\n" Mar 27 00:34:37.849: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 27 00:34:37.849: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 27 00:34:37.849: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 27 00:34:47.861: INFO: Deleting all statefulset in ns statefulset-6807 Mar 27 00:34:47.865: INFO: Scaling statefulset ss to 0 Mar 27 00:34:47.873: INFO: Waiting for statefulset status.replicas updated to 0 Mar 27 00:34:47.876: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:34:47.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6807" for this suite. • [SLOW TEST:74.572 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":215,"skipped":3756,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:34:47.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-cc761f31-d770-40c1-9530-2c3ef6106194 STEP: Creating a pod to test consume secrets Mar 27 00:34:48.041: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152" in namespace "projected-9919" to be "Succeeded or Failed" Mar 27 00:34:48.045: INFO: Pod "pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.203288ms Mar 27 00:34:50.049: INFO: Pod "pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00863011s Mar 27 00:34:52.054: INFO: Pod "pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012879201s STEP: Saw pod success Mar 27 00:34:52.054: INFO: Pod "pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152" satisfied condition "Succeeded or Failed" Mar 27 00:34:52.056: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152 container secret-volume-test: STEP: delete the pod Mar 27 00:34:52.105: INFO: Waiting for pod pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152 to disappear Mar 27 00:34:52.115: INFO: Pod pod-projected-secrets-2297afb3-c5d7-41f4-8b88-e2fcc0b78152 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:34:52.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9919" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3764,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:34:52.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 27 00:34:52.164: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:34:59.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5812" for this suite. 
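The RestartNever behavior exercised above follows from init container semantics: init containers run strictly in order, each must exit successfully before the next starts, and the app container only runs once all of them have completed. A runnable sketch of the same shape, with hypothetical names (init-demo, init-1, init-2) rather than the pod the test generates:

cat <<'EOF' | kubectl apply -f -
# hypothetical names throughout; not the pod generated by this run
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init step"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init step"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo runs only after both init containers succeed"]
EOF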
• [SLOW TEST:7.116 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":217,"skipped":3784,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:34:59.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Mar 27 00:34:59.292: INFO: Waiting up to 5m0s for pod "var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296" in namespace "var-expansion-4291" to be "Succeeded or Failed" Mar 27 00:34:59.307: INFO: Pod "var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296": Phase="Pending", Reason="", readiness=false. Elapsed: 15.063893ms Mar 27 00:35:01.311: INFO: Pod "var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019451138s Mar 27 00:35:03.316: INFO: Pod "var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023822789s STEP: Saw pod success Mar 27 00:35:03.316: INFO: Pod "var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296" satisfied condition "Succeeded or Failed" Mar 27 00:35:03.319: INFO: Trying to get logs from node latest-worker2 pod var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296 container dapi-container: STEP: delete the pod Mar 27 00:35:03.350: INFO: Waiting for pod var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296 to disappear Mar 27 00:35:03.354: INFO: Pod var-expansion-eeced71a-8d29-4fda-8bb6-811f168ac296 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:35:03.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4291" for this suite. 
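The var-expansion test above relies on Kubernetes' dependent environment variables: a value in env may reference a previously defined variable as $(VAR_NAME), and the kubelet expands it before the container starts. A minimal sketch of the same idea, under assumed names (env-compose-demo, BASE_VAR, COMPOSED_VAR) instead of the generated ones in the log:

cat <<'EOF' | kubectl apply -f -
# hypothetical names; the quoted heredoc keeps $(BASE_VAR) literal for the kubelet
apiVersion: v1
kind: Pod
metadata:
  name: env-compose-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED_VAR"]
    env:
    - name: BASE_VAR
      value: "value-1"
    - name: COMPOSED_VAR
      value: "prefix-$(BASE_VAR)-suffix"  # $(VAR_NAME) is expanded by the kubelet, not the shell
EOF

The test asserts the container's output shows the composed value before the pod reaches Succeeded.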
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3791,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:35:03.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-c200e283-cb90-4c22-85c1-d5ac0bdc9ee3 STEP: Creating a pod to test consume configMaps Mar 27 00:35:03.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7" in namespace "configmap-9737" to be "Succeeded or Failed" Mar 27 00:35:03.473: INFO: Pod "pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7": Phase="Pending", Reason="", readiness=false. Elapsed: 51.041154ms Mar 27 00:35:05.487: INFO: Pod "pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064777573s Mar 27 00:35:07.491: INFO: Pod "pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068977188s STEP: Saw pod success Mar 27 00:35:07.491: INFO: Pod "pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7" satisfied condition "Succeeded or Failed" Mar 27 00:35:07.494: INFO: Trying to get logs from node latest-worker pod pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7 container configmap-volume-test: STEP: delete the pod Mar 27 00:35:07.512: INFO: Waiting for pod pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7 to disappear Mar 27 00:35:07.517: INFO: Pod pod-configmaps-790ebebf-b49c-4c63-a42a-c76825d724a7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:35:07.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9737" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3797,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:35:07.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:35:12.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1050" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":220,"skipped":3799,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:35:12.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9131.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9131.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9131.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9131.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9131.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9131.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9131.svc.cluster.local 
SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9131.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9131.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.61.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.61.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.61.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.61.224_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9131.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9131.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9131.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9131.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9131.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9131.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9131.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9131.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9131.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.61.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.61.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.61.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.61.224_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 27 00:35:18.472: INFO: Unable to read wheezy_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.479: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.482: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.506: INFO: Unable to read jessie_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:18.534: INFO: Lookups using dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f failed for: [wheezy_udp@dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_udp@dns-test-service.dns-9131.svc.cluster.local jessie_tcp@dns-test-service.dns-9131.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local] Mar 27 00:35:23.539: INFO: Unable to read wheezy_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.542: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods 
dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.546: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.550: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.573: INFO: Unable to read jessie_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.576: INFO: Unable to read jessie_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.579: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.583: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:23.602: INFO: Lookups using dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f failed for: [wheezy_udp@dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_udp@dns-test-service.dns-9131.svc.cluster.local jessie_tcp@dns-test-service.dns-9131.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local] Mar 27 00:35:28.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.547: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.550: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.571: INFO: Unable to read jessie_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the 
server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.574: INFO: Unable to read jessie_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.577: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.580: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:28.601: INFO: Lookups using dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f failed for: [wheezy_udp@dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_udp@dns-test-service.dns-9131.svc.cluster.local jessie_tcp@dns-test-service.dns-9131.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local] Mar 27 00:35:33.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.548: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.552: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.576: INFO: Unable to read jessie_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.580: INFO: Unable to read jessie_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.583: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod 
dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:33.607: INFO: Lookups using dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f failed for: [wheezy_udp@dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_udp@dns-test-service.dns-9131.svc.cluster.local jessie_tcp@dns-test-service.dns-9131.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local] Mar 27 00:35:38.539: INFO: Unable to read wheezy_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.543: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.547: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.574: INFO: Unable to read jessie_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.581: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.584: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:38.603: INFO: Lookups using dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f failed for: [wheezy_udp@dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_udp@dns-test-service.dns-9131.svc.cluster.local jessie_tcp@dns-test-service.dns-9131.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local] Mar 27 
00:35:43.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.547: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.576: INFO: Unable to read jessie_udp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.579: INFO: Unable to read jessie_tcp@dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.585: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local from pod dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f: the server could not find the requested resource (get pods dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f) Mar 27 00:35:43.605: INFO: Lookups using dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f failed for: [wheezy_udp@dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@dns-test-service.dns-9131.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_udp@dns-test-service.dns-9131.svc.cluster.local jessie_tcp@dns-test-service.dns-9131.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9131.svc.cluster.local] Mar 27 00:35:48.606: INFO: DNS probes using dns-9131/dns-test-ae3b67bf-08ea-4bf8-804c-62a68b99305f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:35:49.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9131" for this suite. 
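The wheezy and jessie probe pods run the dig loops quoted above and write an OK marker file per successful lookup; the repeated "Unable to read ..." entries are the framework polling for those marker files through the API server before the records have resolved, and the test only passes once every lookup has succeeded. The individual queries can be reproduced by hand from any pod that has dig installed, using the names from this run:

dig +notcp +noall +answer +search dns-test-service.dns-9131.svc.cluster.local A
dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9131.svc.cluster.local SRV
dig +notcp +noall +answer +search 224.61.96.10.in-addr.arpa. PTR  # reverse lookup for the service IP 10.96.61.224

These cover the record types the test checks over both UDP (+notcp) and TCP (+tcp): A records for the service name, SRV records for its named _http._tcp port, and a PTR record for the allocated ClusterIP.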
• [SLOW TEST:36.906 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":221,"skipped":3824,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:35:49.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:36:49.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1486" for this suite. • [SLOW TEST:60.103 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:36:49.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:36:49.926: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:36:51.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866210, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866210, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866210, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866209, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:36:54.951: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:36:54.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4905-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:36:56.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5246" for this suite. STEP: Destroying namespace "webhook-5246-markers" for this suite. 
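Registering "the mutating webhook for custom resource e2e-test-webhook-4905-crds.webhook.example.com via the AdmissionRegistration API", as the step above puts it, amounts to creating a MutatingWebhookConfiguration whose rules select that custom resource and whose clientConfig points at the e2e-test-webhook service deployed earlier. A sketch only; the path and caBundle here are placeholders, since the e2e framework wires both programmatically:

cat <<'EOF' | kubectl apply -f -
# sketch with assumed values; not the configuration the test registers
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook-sketch
webhooks:
- name: mutate-crd.webhook.example.com
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-5246
      path: /mutating-custom-resource  # assumed path
    caBundle: ""                       # base64 CA bundle for the webhook's serving cert goes here
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-4905-crds"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
EOF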
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.850 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":223,"skipped":3858,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:36:56.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-3ad43481-dfc2-4af0-ae9c-d5b3fc6a7682 STEP: Creating a pod to test consume secrets Mar 27 00:36:56.285: INFO: Waiting up to 5m0s for pod "pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46" in namespace "secrets-1137" to be "Succeeded or Failed" Mar 27 00:36:56.397: INFO: Pod "pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46": Phase="Pending", Reason="", readiness=false. Elapsed: 112.48067ms Mar 27 00:36:58.401: INFO: Pod "pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116468008s Mar 27 00:37:00.406: INFO: Pod "pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120919687s STEP: Saw pod success Mar 27 00:37:00.406: INFO: Pod "pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46" satisfied condition "Succeeded or Failed" Mar 27 00:37:00.409: INFO: Trying to get logs from node latest-worker pod pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46 container secret-volume-test: STEP: delete the pod Mar 27 00:37:00.466: INFO: Waiting for pod pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46 to disappear Mar 27 00:37:00.518: INFO: Pod pod-secrets-2931df50-885c-434c-9800-7eec54e4cf46 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:00.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1137" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3859,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:00.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:37:01.385: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:37:03.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866221, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866221, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866221, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866221, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:37:06.438: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:06.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7041" for this suite. STEP: Destroying namespace "webhook-7041-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.139 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":225,"skipped":3861,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:06.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-831754d2-8984-4483-a777-999268b4177d STEP: Creating a pod to test consume secrets Mar 27 00:37:06.815: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1" in namespace "projected-6389" to be "Succeeded or Failed" Mar 27 00:37:06.819: INFO: Pod "pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725407ms Mar 27 00:37:08.823: INFO: Pod "pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007424575s Mar 27 00:37:10.834: INFO: Pod "pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018867451s STEP: Saw pod success Mar 27 00:37:10.834: INFO: Pod "pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1" satisfied condition "Succeeded or Failed" Mar 27 00:37:10.836: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1 container projected-secret-volume-test: STEP: delete the pod Mar 27 00:37:10.880: INFO: Waiting for pod pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1 to disappear Mar 27 00:37:10.902: INFO: Pod pod-projected-secrets-ece4ed04-9ccb-46a8-9bc0-35626f0e78b1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:10.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6389" for this suite. 
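A projected volume differs from a plain secret volume in that several sources (secrets, configMaps, downwardAPI, serviceAccountToken) can be combined under a single mount point; this test consumes one secret source. A sketch under assumed names, not the generated projected-secret-test name above:

kubectl create secret generic demo-projected-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
# hypothetical names throughout
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: demo-projected-secret   # without items, files are named after the secret's keys
EOF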
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3882,"failed":0} SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:10.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:37:10.959: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:14.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9234" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3884,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:15.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 27 00:37:15.085: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2853 /api/v1/namespaces/watch-2853/configmaps/e2e-watch-test-label-changed fcf2b2c2-0ba3-4235-966c-c891426967e3 3084908 0 2020-03-27 00:37:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:37:15.085: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2853 /api/v1/namespaces/watch-2853/configmaps/e2e-watch-test-label-changed fcf2b2c2-0ba3-4235-966c-c891426967e3 3084909 0 2020-03-27 00:37:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:37:15.085: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2853 /api/v1/namespaces/watch-2853/configmaps/e2e-watch-test-label-changed fcf2b2c2-0ba3-4235-966c-c891426967e3 3084910 0 2020-03-27 00:37:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 27 00:37:25.115: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2853 /api/v1/namespaces/watch-2853/configmaps/e2e-watch-test-label-changed fcf2b2c2-0ba3-4235-966c-c891426967e3 3084961 0 2020-03-27 00:37:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:37:25.115: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2853 /api/v1/namespaces/watch-2853/configmaps/e2e-watch-test-label-changed fcf2b2c2-0ba3-4235-966c-c891426967e3 3084962 0 2020-03-27 00:37:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:37:25.115: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2853 /api/v1/namespaces/watch-2853/configmaps/e2e-watch-test-label-changed fcf2b2c2-0ba3-4235-966c-c891426967e3 3084963 0 2020-03-27 00:37:15 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:25.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2853" for this suite. 
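What the log records here is selector semantics rather than object lifecycle: when the label is changed away from the selector, the watch delivers a DELETED event even though the ConfigMap still exists, and restoring the label delivers a fresh ADDED carrying the mutations made in the meantime (hence mutation: 2 on the re-added object). The same event stream can be observed with the label from this run:

kubectl get configmaps -n watch-2853 -l watch-this-configmap=label-changed-and-restored --watch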
• [SLOW TEST:10.156 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":228,"skipped":3894,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:25.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:37:25.186: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:25.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-855" for this suite. 
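The status sub-resource exercised above is opt-in on a CRD: once subresources.status is set on a served version, GET/PUT/PATCH against .../<resource>/<name>/status read and write only the status stanza, leaving spec untouched. A minimal CRD sketch with hypothetical names (demos.example.com), not the definition the test creates:

cat <<'EOF' | kubectl apply -f -
# hypothetical names throughout
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}  # enables the /status sub-resource this test gets/updates/patches
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF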
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":229,"skipped":3915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:25.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-3626 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 27 00:37:25.865: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 27 00:37:25.948: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:37:27.952: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:37:29.952: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:37:31.951: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:37:33.952: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:37:35.951: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:37:37.951: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:37:39.951: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:37:41.952: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 27 00:37:41.958: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 27 00:37:45.992: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=udp&host=10.244.2.192&port=8081&tries=1'] Namespace:pod-network-test-3626 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:37:45.992: INFO: >>> kubeConfig: /root/.kube/config I0327 00:37:46.027704 7 log.go:172] (0xc002c0a630) (0xc00032d540) Create stream I0327 00:37:46.027736 7 log.go:172] (0xc002c0a630) (0xc00032d540) Stream added, broadcasting: 1 I0327 00:37:46.030095 7 log.go:172] (0xc002c0a630) Reply frame received for 1 I0327 00:37:46.030139 7 log.go:172] (0xc002c0a630) (0xc0002760a0) Create stream I0327 00:37:46.030155 7 log.go:172] (0xc002c0a630) (0xc0002760a0) Stream added, broadcasting: 3 I0327 00:37:46.031317 7 log.go:172] (0xc002c0a630) Reply frame received for 3 I0327 00:37:46.031371 7 log.go:172] (0xc002c0a630) (0xc00032d680) Create stream I0327 00:37:46.031388 7 log.go:172] (0xc002c0a630) (0xc00032d680) Stream added, broadcasting: 5 I0327 00:37:46.032611 7 log.go:172] (0xc002c0a630) Reply frame received for 5 I0327 00:37:46.131330 7 log.go:172] (0xc002c0a630) Data frame received for 3 I0327 00:37:46.131385 7 
log.go:172] (0xc0002760a0) (3) Data frame handling I0327 00:37:46.131425 7 log.go:172] (0xc0002760a0) (3) Data frame sent I0327 00:37:46.131676 7 log.go:172] (0xc002c0a630) Data frame received for 3 I0327 00:37:46.131704 7 log.go:172] (0xc0002760a0) (3) Data frame handling I0327 00:37:46.132266 7 log.go:172] (0xc002c0a630) Data frame received for 5 I0327 00:37:46.132290 7 log.go:172] (0xc00032d680) (5) Data frame handling I0327 00:37:46.133309 7 log.go:172] (0xc002c0a630) Data frame received for 1 I0327 00:37:46.133332 7 log.go:172] (0xc00032d540) (1) Data frame handling I0327 00:37:46.133348 7 log.go:172] (0xc00032d540) (1) Data frame sent I0327 00:37:46.133550 7 log.go:172] (0xc002c0a630) (0xc00032d540) Stream removed, broadcasting: 1 I0327 00:37:46.133586 7 log.go:172] (0xc002c0a630) Go away received I0327 00:37:46.133640 7 log.go:172] (0xc002c0a630) (0xc00032d540) Stream removed, broadcasting: 1 I0327 00:37:46.133664 7 log.go:172] (0xc002c0a630) (0xc0002760a0) Stream removed, broadcasting: 3 I0327 00:37:46.133675 7 log.go:172] (0xc002c0a630) (0xc00032d680) Stream removed, broadcasting: 5 Mar 27 00:37:46.133: INFO: Waiting for responses: map[] Mar 27 00:37:46.156: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=udp&host=10.244.1.68&port=8081&tries=1'] Namespace:pod-network-test-3626 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:37:46.156: INFO: >>> kubeConfig: /root/.kube/config I0327 00:37:46.211784 7 log.go:172] (0xc003cb2420) (0xc000520f00) Create stream I0327 00:37:46.211818 7 log.go:172] (0xc003cb2420) (0xc000520f00) Stream added, broadcasting: 1 I0327 00:37:46.213888 7 log.go:172] (0xc003cb2420) Reply frame received for 1 I0327 00:37:46.213940 7 log.go:172] (0xc003cb2420) (0xc000b77a40) Create stream I0327 00:37:46.213957 7 log.go:172] (0xc003cb2420) (0xc000b77a40) Stream added, broadcasting: 3 I0327 00:37:46.215155 7 log.go:172] (0xc003cb2420) Reply frame received for 3 I0327 00:37:46.215199 7 log.go:172] (0xc003cb2420) (0xc000521c20) Create stream I0327 00:37:46.215214 7 log.go:172] (0xc003cb2420) (0xc000521c20) Stream added, broadcasting: 5 I0327 00:37:46.216212 7 log.go:172] (0xc003cb2420) Reply frame received for 5 I0327 00:37:46.290360 7 log.go:172] (0xc003cb2420) Data frame received for 3 I0327 00:37:46.290391 7 log.go:172] (0xc000b77a40) (3) Data frame handling I0327 00:37:46.290406 7 log.go:172] (0xc000b77a40) (3) Data frame sent I0327 00:37:46.290612 7 log.go:172] (0xc003cb2420) Data frame received for 5 I0327 00:37:46.290627 7 log.go:172] (0xc000521c20) (5) Data frame handling I0327 00:37:46.290677 7 log.go:172] (0xc003cb2420) Data frame received for 3 I0327 00:37:46.290702 7 log.go:172] (0xc000b77a40) (3) Data frame handling I0327 00:37:46.292968 7 log.go:172] (0xc003cb2420) Data frame received for 1 I0327 00:37:46.292981 7 log.go:172] (0xc000520f00) (1) Data frame handling I0327 00:37:46.292987 7 log.go:172] (0xc000520f00) (1) Data frame sent I0327 00:37:46.292995 7 log.go:172] (0xc003cb2420) (0xc000520f00) Stream removed, broadcasting: 1 I0327 00:37:46.293002 7 log.go:172] (0xc003cb2420) Go away received I0327 00:37:46.293301 7 log.go:172] (0xc003cb2420) (0xc000520f00) Stream removed, broadcasting: 1 I0327 00:37:46.293324 7 log.go:172] (0xc003cb2420) (0xc000b77a40) Stream removed, broadcasting: 3 I0327 00:37:46.293339 7 log.go:172] (0xc003cb2420) (0xc000521c20) Stream removed, broadcasting: 5 Mar 27 00:37:46.293: 
INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:46.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3626" for this suite. • [SLOW TEST:20.520 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:46.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:37:46.830: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:37:48.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866266, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866266, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866266, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866266, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:37:51.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:37:51.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for 
custom resource e2e-test-webhook-3526-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:53.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4075" for this suite. STEP: Destroying namespace "webhook-4075-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.339 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":231,"skipped":3968,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:53.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 27 00:37:53.716: INFO: Waiting up to 5m0s for pod "pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4" in namespace "emptydir-6250" to be "Succeeded or Failed" Mar 27 00:37:53.730: INFO: Pod "pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.118224ms Mar 27 00:37:55.748: INFO: Pod "pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032202064s Mar 27 00:37:57.752: INFO: Pod "pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036235857s STEP: Saw pod success Mar 27 00:37:57.752: INFO: Pod "pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4" satisfied condition "Succeeded or Failed" Mar 27 00:37:57.755: INFO: Trying to get logs from node latest-worker pod pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4 container test-container: STEP: delete the pod Mar 27 00:37:57.767: INFO: Waiting for pod pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4 to disappear Mar 27 00:37:57.771: INFO: Pod pod-b5206c06-6b87-4ca4-96f2-08cbaf5f8ca4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:37:57.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6250" for this suite. 
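------------------------------
The pod behind an emptyDir-on-tmpfs case is small. A sketch, again assuming client-go v0.18+; the busybox image and command are illustrative stand-ins for the e2e mount-test image, and the 0777 mode in the test name is exercised by that image inside the container — EmptyDirVolumeSource itself only selects the backing medium:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "mount | grep /mnt/volume; ls -ld /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------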
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:37:57.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:37:57.833: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-21b455ac-b73c-4791-9091-828cf806bef3" in namespace "security-context-test-1076" to be "Succeeded or Failed" Mar 27 00:37:57.837: INFO: Pod "busybox-privileged-false-21b455ac-b73c-4791-9091-828cf806bef3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366586ms Mar 27 00:37:59.853: INFO: Pod "busybox-privileged-false-21b455ac-b73c-4791-9091-828cf806bef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019872402s Mar 27 00:38:01.857: INFO: Pod "busybox-privileged-false-21b455ac-b73c-4791-9091-828cf806bef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024430267s Mar 27 00:38:01.858: INFO: Pod "busybox-privileged-false-21b455ac-b73c-4791-9091-828cf806bef3" satisfied condition "Succeeded or Failed" Mar 27 00:38:01.864: INFO: Got logs for pod "busybox-privileged-false-21b455ac-b73c-4791-9091-828cf806bef3": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:38:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1076" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4017,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:38:01.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-9197e9a7-b21b-4dce-b297-99f4ef5cd03c STEP: Creating a pod to test consume configMaps Mar 27 00:38:01.985: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7" in namespace "configmap-7733" to be "Succeeded or Failed" Mar 27 00:38:02.015: INFO: Pod "pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.77846ms Mar 27 00:38:04.019: INFO: Pod "pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034203659s Mar 27 00:38:06.023: INFO: Pod "pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038438011s STEP: Saw pod success Mar 27 00:38:06.023: INFO: Pod "pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7" satisfied condition "Succeeded or Failed" Mar 27 00:38:06.026: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7 container configmap-volume-test: STEP: delete the pod Mar 27 00:38:06.050: INFO: Waiting for pod pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7 to disappear Mar 27 00:38:06.054: INFO: Pod pod-configmaps-c0c9fd9f-ca66-4cc5-b7ff-0c40cba94fe7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:38:06.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7733" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:38:06.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2725 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 27 00:38:06.116: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 27 00:38:06.198: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:38:08.202: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:38:10.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:38:12.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:38:14.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:38:16.203: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:38:18.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:38:20.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:38:22.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 27 00:38:24.202: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 27 00:38:24.209: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 27 00:38:28.264: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.195:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2725 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:38:28.264: INFO: >>> kubeConfig: /root/.kube/config I0327 00:38:28.304548 7 log.go:172] (0xc0028b1760) (0xc002831680) Create stream I0327 00:38:28.304579 7 log.go:172] (0xc0028b1760) (0xc002831680) Stream added, broadcasting: 1 I0327 00:38:28.306518 7 log.go:172] (0xc0028b1760) Reply frame received for 1 I0327 00:38:28.306570 7 log.go:172] (0xc0028b1760) (0xc001496280) Create stream I0327 00:38:28.306587 7 log.go:172] (0xc0028b1760) (0xc001496280) Stream added, broadcasting: 3 I0327 00:38:28.307648 7 log.go:172] (0xc0028b1760) Reply frame received for 3 I0327 00:38:28.307689 7 log.go:172] (0xc0028b1760) (0xc00117d0e0) Create stream I0327 00:38:28.307705 7 log.go:172] (0xc0028b1760) (0xc00117d0e0) Stream added, broadcasting: 5 I0327 00:38:28.308746 7 log.go:172] (0xc0028b1760) Reply frame received for 5 I0327 00:38:28.376529 7 log.go:172] (0xc0028b1760) Data 
frame received for 3 I0327 00:38:28.376655 7 log.go:172] (0xc001496280) (3) Data frame handling I0327 00:38:28.376692 7 log.go:172] (0xc001496280) (3) Data frame sent I0327 00:38:28.376706 7 log.go:172] (0xc0028b1760) Data frame received for 3 I0327 00:38:28.376721 7 log.go:172] (0xc001496280) (3) Data frame handling I0327 00:38:28.376776 7 log.go:172] (0xc0028b1760) Data frame received for 5 I0327 00:38:28.376804 7 log.go:172] (0xc00117d0e0) (5) Data frame handling I0327 00:38:28.379223 7 log.go:172] (0xc0028b1760) Data frame received for 1 I0327 00:38:28.379255 7 log.go:172] (0xc002831680) (1) Data frame handling I0327 00:38:28.379276 7 log.go:172] (0xc002831680) (1) Data frame sent I0327 00:38:28.379296 7 log.go:172] (0xc0028b1760) (0xc002831680) Stream removed, broadcasting: 1 I0327 00:38:28.379320 7 log.go:172] (0xc0028b1760) Go away received I0327 00:38:28.379491 7 log.go:172] (0xc0028b1760) (0xc002831680) Stream removed, broadcasting: 1 I0327 00:38:28.379513 7 log.go:172] (0xc0028b1760) (0xc001496280) Stream removed, broadcasting: 3 I0327 00:38:28.379523 7 log.go:172] (0xc0028b1760) (0xc00117d0e0) Stream removed, broadcasting: 5 Mar 27 00:38:28.379: INFO: Found all expected endpoints: [netserver-0] Mar 27 00:38:28.383: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.72:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2725 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 27 00:38:28.383: INFO: >>> kubeConfig: /root/.kube/config I0327 00:38:28.414314 7 log.go:172] (0xc002cb48f0) (0xc000fd8140) Create stream I0327 00:38:28.414338 7 log.go:172] (0xc002cb48f0) (0xc000fd8140) Stream added, broadcasting: 1 I0327 00:38:28.416150 7 log.go:172] (0xc002cb48f0) Reply frame received for 1 I0327 00:38:28.416199 7 log.go:172] (0xc002cb48f0) (0xc000fd85a0) Create stream I0327 00:38:28.416217 7 log.go:172] (0xc002cb48f0) (0xc000fd85a0) Stream added, broadcasting: 3 I0327 00:38:28.417698 7 log.go:172] (0xc002cb48f0) Reply frame received for 3 I0327 00:38:28.417753 7 log.go:172] (0xc002cb48f0) (0xc000fd86e0) Create stream I0327 00:38:28.417771 7 log.go:172] (0xc002cb48f0) (0xc000fd86e0) Stream added, broadcasting: 5 I0327 00:38:28.419061 7 log.go:172] (0xc002cb48f0) Reply frame received for 5 I0327 00:38:28.499870 7 log.go:172] (0xc002cb48f0) Data frame received for 3 I0327 00:38:28.499925 7 log.go:172] (0xc000fd85a0) (3) Data frame handling I0327 00:38:28.499965 7 log.go:172] (0xc000fd85a0) (3) Data frame sent I0327 00:38:28.500219 7 log.go:172] (0xc002cb48f0) Data frame received for 5 I0327 00:38:28.500251 7 log.go:172] (0xc000fd86e0) (5) Data frame handling I0327 00:38:28.500282 7 log.go:172] (0xc002cb48f0) Data frame received for 3 I0327 00:38:28.500295 7 log.go:172] (0xc000fd85a0) (3) Data frame handling I0327 00:38:28.502538 7 log.go:172] (0xc002cb48f0) Data frame received for 1 I0327 00:38:28.502579 7 log.go:172] (0xc000fd8140) (1) Data frame handling I0327 00:38:28.502609 7 log.go:172] (0xc000fd8140) (1) Data frame sent I0327 00:38:28.502632 7 log.go:172] (0xc002cb48f0) (0xc000fd8140) Stream removed, broadcasting: 1 I0327 00:38:28.502699 7 log.go:172] (0xc002cb48f0) (0xc000fd8140) Stream removed, broadcasting: 1 I0327 00:38:28.502721 7 log.go:172] (0xc002cb48f0) (0xc000fd85a0) Stream removed, broadcasting: 3 I0327 00:38:28.502743 7 log.go:172] (0xc002cb48f0) (0xc000fd86e0) Stream removed, broadcasting: 5 Mar 27 00:38:28.502: INFO: Found all 
expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:38:28.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0327 00:38:28.503108 7 log.go:172] (0xc002cb48f0) Go away received STEP: Destroying namespace "pod-network-test-2725" for this suite. • [SLOW TEST:22.449 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4080,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:38:28.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Mar 27 00:38:28.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Mar 27 00:38:28.753: INFO: stderr: "" Mar 27 00:38:28.754: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:38:28.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6055" for this suite. 
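------------------------------
kubectl api-versions is a thin wrapper over the server's discovery endpoints, so the same check the test performs can be written against client-go's discovery client. A minimal sketch, assuming client-go v0.18+:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	// The legacy core group has an empty name, so its sole version prints as
	// plain "v1" — the entry this test asserts is present.
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion)
		}
	}
}
------------------------------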
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":236,"skipped":4094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:38:28.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-d6b32f52-e3de-4927-a7cc-3b010bab387e STEP: Creating a pod to test consume secrets Mar 27 00:38:28.860: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf" in namespace "projected-7208" to be "Succeeded or Failed" Mar 27 00:38:28.885: INFO: Pod "pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.464595ms Mar 27 00:38:31.189: INFO: Pod "pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329223689s Mar 27 00:38:33.193: INFO: Pod "pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333404659s STEP: Saw pod success Mar 27 00:38:33.193: INFO: Pod "pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf" satisfied condition "Succeeded or Failed" Mar 27 00:38:33.196: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf container projected-secret-volume-test: STEP: delete the pod Mar 27 00:38:33.218: INFO: Waiting for pod pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf to disappear Mar 27 00:38:33.222: INFO: Pod pod-projected-secrets-75c49cde-9d0a-491a-8187-5ddc099e1acf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:38:33.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7208" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4137,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:38:33.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 27 00:38:33.322: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 27 00:38:33.344: INFO: Waiting for terminating namespaces to be deleted... Mar 27 00:38:33.347: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 27 00:38:33.353: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.353: INFO: Container kindnet-cni ready: true, restart count 0 Mar 27 00:38:33.353: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.353: INFO: Container kube-proxy ready: true, restart count 0 Mar 27 00:38:33.353: INFO: netserver-0 from pod-network-test-2725 started at 2020-03-27 00:38:06 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.353: INFO: Container webserver ready: true, restart count 0 Mar 27 00:38:33.353: INFO: host-test-container-pod from pod-network-test-2725 started at 2020-03-27 00:38:24 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.353: INFO: Container agnhost ready: true, restart count 0 Mar 27 00:38:33.353: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 27 00:38:33.359: INFO: test-container-pod from pod-network-test-2725 started at 2020-03-27 00:38:24 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.359: INFO: Container webserver ready: true, restart count 0 Mar 27 00:38:33.359: INFO: netserver-1 from pod-network-test-2725 started at 2020-03-27 00:38:06 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.359: INFO: Container webserver ready: true, restart count 0 Mar 27 00:38:33.359: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.359: INFO: Container kindnet-cni ready: true, restart count 0 Mar 27 00:38:33.359: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 27 00:38:33.359: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-17a723bd-cbcd-4651-bcf4-88c77582d99d 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-17a723bd-cbcd-4651-bcf4-88c77582d99d off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-17a723bd-cbcd-4651-bcf4-88c77582d99d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:43:41.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6835" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.287 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":238,"skipped":4139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:43:41.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Mar 27 00:43:45.632: INFO: Pod pod-hostip-0f3ff96c-45ae-4df3-b5e7-241992625791 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:43:45.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8091" for this suite. 
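------------------------------
status.hostIP is filled in by the system once the pod is bound to a node, which is why the test creates the pod and then reads it back. A sketch of that read-back with a simple poll; the pod name is hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// status.hostIP stays empty until the pod is scheduled and its status synced.
	for {
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{}) // hypothetical pod
		if err != nil {
			panic(err)
		}
		if pod.Status.HostIP != "" {
			fmt.Printf("Pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
------------------------------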
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4166,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:43:45.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 27 00:43:45.707: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086449 0 2020-03-27 00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:43:45.707: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086449 0 2020-03-27 00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 27 00:43:55.715: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086514 0 2020-03-27 00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:43:55.715: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086514 0 2020-03-27 00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 27 00:44:05.723: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086546 0 2020-03-27 00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:44:05.723: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086546 0 2020-03-27 
00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 27 00:44:15.730: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086576 0 2020-03-27 00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:44:15.730: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-a f4056db1-b112-4459-95c7-c7af60c7020b 3086576 0 2020-03-27 00:43:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 27 00:44:25.737: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-b 348d64f2-05b7-4231-9d84-cf3ffb93db9f 3086604 0 2020-03-27 00:44:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:44:25.737: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-b 348d64f2-05b7-4231-9d84-cf3ffb93db9f 3086604 0 2020-03-27 00:44:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 27 00:44:35.742: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-b 348d64f2-05b7-4231-9d84-cf3ffb93db9f 3086635 0 2020-03-27 00:44:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:44:35.742: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3310 /api/v1/namespaces/watch-3310/configmaps/e2e-watch-test-configmap-b 348d64f2-05b7-4231-9d84-cf3ffb93db9f 3086635 0 2020-03-27 00:44:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:44:45.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3310" for this suite. 
• [SLOW TEST:60.112 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":240,"skipped":4171,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:44:45.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2198/configmap-test-0b47ac56-b676-4997-a0a0-855d47abde12 STEP: Creating a pod to test consume configMaps Mar 27 00:44:45.830: INFO: Waiting up to 5m0s for pod "pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a" in namespace "configmap-2198" to be "Succeeded or Failed" Mar 27 00:44:45.853: INFO: Pod "pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.938481ms Mar 27 00:44:47.856: INFO: Pod "pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026337499s Mar 27 00:44:49.860: INFO: Pod "pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030102513s STEP: Saw pod success Mar 27 00:44:49.860: INFO: Pod "pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a" satisfied condition "Succeeded or Failed" Mar 27 00:44:49.863: INFO: Trying to get logs from node latest-worker pod pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a container env-test: STEP: delete the pod Mar 27 00:44:49.902: INFO: Waiting for pod pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a to disappear Mar 27 00:44:49.906: INFO: Pod pod-configmaps-959ebcd5-9def-4ded-bd70-58dd73752e2a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:44:49.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2198" for this suite. 
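------------------------------
Consuming a ConfigMap "via the environment" means an env var whose valueFrom points at a ConfigMap key. A sketch; the ConfigMap name and key are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						// Resolved once at container start from the named ConfigMap key.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // hypothetical
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------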
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4172,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:44:49.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 27 00:44:50.011: INFO: Waiting up to 5m0s for pod "downward-api-88615ea2-4063-4759-accb-9f4bac346e0d" in namespace "downward-api-4403" to be "Succeeded or Failed" Mar 27 00:44:50.014: INFO: Pod "downward-api-88615ea2-4063-4759-accb-9f4bac346e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.429405ms Mar 27 00:44:52.018: INFO: Pod "downward-api-88615ea2-4063-4759-accb-9f4bac346e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00704549s Mar 27 00:44:54.026: INFO: Pod "downward-api-88615ea2-4063-4759-accb-9f4bac346e0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015545708s STEP: Saw pod success Mar 27 00:44:54.026: INFO: Pod "downward-api-88615ea2-4063-4759-accb-9f4bac346e0d" satisfied condition "Succeeded or Failed" Mar 27 00:44:54.029: INFO: Trying to get logs from node latest-worker pod downward-api-88615ea2-4063-4759-accb-9f4bac346e0d container dapi-container: STEP: delete the pod Mar 27 00:44:54.048: INFO: Waiting for pod downward-api-88615ea2-4063-4759-accb-9f4bac346e0d to disappear Mar 27 00:44:54.052: INFO: Pod downward-api-88615ea2-4063-4759-accb-9f4bac346e0d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:44:54.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4403" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4180,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:44:54.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Mar 27 00:44:54.146: INFO: Waiting up to 5m0s for pod "client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7" in namespace "containers-3515" to be "Succeeded or Failed" Mar 27 00:44:54.148: INFO: Pod "client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478283ms Mar 27 00:44:56.164: INFO: Pod "client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017714761s Mar 27 00:44:58.167: INFO: Pod "client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021203913s STEP: Saw pod success Mar 27 00:44:58.167: INFO: Pod "client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7" satisfied condition "Succeeded or Failed" Mar 27 00:44:58.169: INFO: Trying to get logs from node latest-worker pod client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7 container test-container: STEP: delete the pod Mar 27 00:44:58.186: INFO: Waiting for pod client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7 to disappear Mar 27 00:44:58.190: INFO: Pod client-containers-a994aab3-ff9c-455a-ab4c-b33c6855c7b7 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:44:58.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3515" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:44:58.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 27 00:44:58.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3170' Mar 27 00:45:00.909: INFO: stderr: "" Mar 27 00:45:00.909: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 27 00:45:01.913: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:45:01.914: INFO: Found 0 / 1 Mar 27 00:45:02.914: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:45:02.914: INFO: Found 0 / 1 Mar 27 00:45:03.913: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:45:03.913: INFO: Found 1 / 1 Mar 27 00:45:03.913: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 27 00:45:03.916: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:45:03.916: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 27 00:45:03.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-5nwkf --namespace=kubectl-3170 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 27 00:45:04.016: INFO: stderr: "" Mar 27 00:45:04.016: INFO: stdout: "pod/agnhost-master-5nwkf patched\n" STEP: checking annotations Mar 27 00:45:04.033: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:45:04.033: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:04.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3170" for this suite. 
• [SLOW TEST:5.841 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":244,"skipped":4276,"failed":0} S ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:04.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:04.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3409" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":245,"skipped":4277,"failed":0} ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:04.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 27 00:45:04.330: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-558 /api/v1/namespaces/watch-558/configmaps/e2e-watch-test-resource-version cd8896bb-4910-4832-99fe-82ebe83009a2 3086832 0 2020-03-27 00:45:04 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 27 00:45:04.330: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-558 /api/v1/namespaces/watch-558/configmaps/e2e-watch-test-resource-version cd8896bb-4910-4832-99fe-82ebe83009a2 3086833 0 2020-03-27 00:45:04 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:04.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-558" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":246,"skipped":4277,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:04.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:45:05.255: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:45:07.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866705, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866705, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866705, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866705, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:45:10.320: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding 
mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:10.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9844" for this suite. STEP: Destroying namespace "webhook-9844-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.070 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":247,"skipped":4282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:10.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 27 00:45:10.523: INFO: Waiting up to 5m0s for pod "pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d" in namespace "emptydir-3143" to be "Succeeded or Failed" Mar 27 00:45:10.538: INFO: Pod "pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.692214ms Mar 27 00:45:12.542: INFO: Pod "pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018828194s Mar 27 00:45:14.546: INFO: Pod "pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02294623s STEP: Saw pod success Mar 27 00:45:14.546: INFO: Pod "pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d" satisfied condition "Succeeded or Failed" Mar 27 00:45:14.549: INFO: Trying to get logs from node latest-worker pod pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d container test-container: STEP: delete the pod Mar 27 00:45:14.603: INFO: Waiting for pod pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d to disappear Mar 27 00:45:14.610: INFO: Pod pod-cf4379e6-5446-421f-95fa-a68a0e2dc16d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:14.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3143" for this suite. 
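A minimal sketch of the kind of pod this emptydir test creates, assuming illustrative names and a stock busybox image (the suite itself uses its agnhost/mounttest images with specific flags to set and check the 0777 mode):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo        # illustrative name
    spec:
      securityContext:
        runAsUser: 1001                # the "non-root" part of (non-root,0777,tmpfs)
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /mnt && echo hello > /mnt/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory               # tmpfs-backed; omit medium for the node's default

The pod runs to completion, which matches the "Succeeded or Failed" wait and the log-fetch step above.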
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4310,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:14.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Mar 27 00:45:14.661: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 27 00:45:14.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-410' Mar 27 00:45:14.932: INFO: stderr: "" Mar 27 00:45:14.932: INFO: stdout: "service/agnhost-slave created\n" Mar 27 00:45:14.933: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 27 00:45:14.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-410' Mar 27 00:45:15.164: INFO: stderr: "" Mar 27 00:45:15.164: INFO: stdout: "service/agnhost-master created\n" Mar 27 00:45:15.164: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 27 00:45:15.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-410' Mar 27 00:45:15.413: INFO: stderr: "" Mar 27 00:45:15.413: INFO: stdout: "service/frontend created\n" Mar 27 00:45:15.413: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 27 00:45:15.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-410' Mar 27 00:45:15.663: INFO: stderr: "" Mar 27 00:45:15.663: INFO: stdout: "deployment.apps/frontend created\n" Mar 27 00:45:15.663: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 27 00:45:15.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-410' Mar 27 00:45:15.953: INFO: stderr: "" Mar 27 00:45:15.953: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 27 00:45:15.953: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 27 00:45:15.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-410' Mar 27 00:45:16.210: INFO: stderr: "" Mar 27 00:45:16.210: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 27 00:45:16.210: INFO: Waiting for all frontend pods to be Running. Mar 27 00:45:26.260: INFO: Waiting for frontend to serve content. Mar 27 00:45:26.270: INFO: Trying to add a new entry to the guestbook. Mar 27 00:45:26.282: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 27 00:45:26.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-410' Mar 27 00:45:26.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:45:26.449: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 27 00:45:26.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-410' Mar 27 00:45:26.626: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:45:26.626: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 27 00:45:26.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-410' Mar 27 00:45:26.798: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:45:26.798: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 27 00:45:26.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-410' Mar 27 00:45:26.890: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:45:26.890: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 27 00:45:26.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-410' Mar 27 00:45:26.982: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:45:26.982: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 27 00:45:26.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-410' Mar 27 00:45:27.092: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 27 00:45:27.092: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:27.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-410" for this suite. 
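The cleanup loop above repeats one forced-deletion form per created resource; generalized, with placeholders for the manifest and namespace:

    kubectl delete --grace-period=0 --force -f <manifest.yaml> --namespace=<namespace>

The stderr warning is expected: with --grace-period=0 and --force, the API object is removed without waiting for the kubelet to confirm that the containers have actually stopped.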
• [SLOW TEST:12.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":249,"skipped":4331,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:27.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-644g STEP: Creating a pod to test atomic-volume-subpath Mar 27 00:45:27.277: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-644g" in namespace "subpath-3976" to be "Succeeded or Failed" Mar 27 00:45:27.339: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Pending", Reason="", readiness=false. Elapsed: 61.810853ms Mar 27 00:45:29.374: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097267532s Mar 27 00:45:31.379: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101700243s Mar 27 00:45:33.383: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 6.105908151s Mar 27 00:45:35.387: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 8.110247372s Mar 27 00:45:37.391: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 10.114526296s Mar 27 00:45:39.396: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 12.11903549s Mar 27 00:45:41.400: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 14.123239165s Mar 27 00:45:43.410: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 16.132794904s Mar 27 00:45:45.414: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 18.136943067s Mar 27 00:45:47.418: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 20.140775552s Mar 27 00:45:49.422: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.145577981s Mar 27 00:45:51.427: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Running", Reason="", readiness=true. Elapsed: 24.15003794s Mar 27 00:45:53.430: INFO: Pod "pod-subpath-test-configmap-644g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.153044503s STEP: Saw pod success Mar 27 00:45:53.430: INFO: Pod "pod-subpath-test-configmap-644g" satisfied condition "Succeeded or Failed" Mar 27 00:45:53.433: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-644g container test-container-subpath-configmap-644g: STEP: delete the pod Mar 27 00:45:53.449: INFO: Waiting for pod pod-subpath-test-configmap-644g to disappear Mar 27 00:45:53.454: INFO: Pod pod-subpath-test-configmap-644g no longer exists STEP: Deleting pod pod-subpath-test-configmap-644g Mar 27 00:45:53.454: INFO: Deleting pod "pod-subpath-test-configmap-644g" in namespace "subpath-3976" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:53.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3976" for this suite. • [SLOW TEST:26.364 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":250,"skipped":4337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:53.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 27 00:45:53.531: INFO: Waiting up to 5m0s for pod "pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa" in namespace "emptydir-7380" to be "Succeeded or Failed" Mar 27 00:45:53.538: INFO: Pod "pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585087ms Mar 27 00:45:55.544: INFO: Pod "pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01262917s Mar 27 00:45:57.548: INFO: Pod "pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016604014s STEP: Saw pod success Mar 27 00:45:57.548: INFO: Pod "pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa" satisfied condition "Succeeded or Failed" Mar 27 00:45:57.551: INFO: Trying to get logs from node latest-worker2 pod pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa container test-container: STEP: delete the pod Mar 27 00:45:57.599: INFO: Waiting for pod pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa to disappear Mar 27 00:45:57.603: INFO: Pod pod-4989905a-27e7-42bd-b9b1-24c539e3a5fa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:45:57.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7380" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4369,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:45:57.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-55 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-55 STEP: creating replication controller externalsvc in namespace services-55 I0327 00:45:57.787228 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-55, replica count: 2 I0327 00:46:00.837712 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0327 00:46:03.837981 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 27 00:46:03.876: INFO: Creating new exec pod Mar 27 00:46:07.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-55 execpodcpbb2 -- /bin/sh -x -c nslookup clusterip-service' Mar 27 00:46:08.151: INFO: stderr: "I0327 00:46:08.069848 3159 log.go:172] (0xc000c080b0) (0xc0006c9540) Create stream\nI0327 00:46:08.069921 3159 log.go:172] (0xc000c080b0) (0xc0006c9540) Stream added, broadcasting: 1\nI0327 00:46:08.072489 3159 log.go:172] (0xc000c080b0) Reply frame received for 1\nI0327 00:46:08.072538 3159 log.go:172] (0xc000c080b0) (0xc000922000) Create stream\nI0327 00:46:08.072566 3159 log.go:172] (0xc000c080b0) (0xc000922000) Stream added, broadcasting: 3\nI0327 00:46:08.073628 3159 log.go:172] 
(0xc000c080b0) Reply frame received for 3\nI0327 00:46:08.073667 3159 log.go:172] (0xc000c080b0) (0xc00098a000) Create stream\nI0327 00:46:08.073677 3159 log.go:172] (0xc000c080b0) (0xc00098a000) Stream added, broadcasting: 5\nI0327 00:46:08.074508 3159 log.go:172] (0xc000c080b0) Reply frame received for 5\nI0327 00:46:08.132531 3159 log.go:172] (0xc000c080b0) Data frame received for 5\nI0327 00:46:08.132563 3159 log.go:172] (0xc00098a000) (5) Data frame handling\nI0327 00:46:08.132584 3159 log.go:172] (0xc00098a000) (5) Data frame sent\n+ nslookup clusterip-service\nI0327 00:46:08.141834 3159 log.go:172] (0xc000c080b0) Data frame received for 3\nI0327 00:46:08.141864 3159 log.go:172] (0xc000922000) (3) Data frame handling\nI0327 00:46:08.141890 3159 log.go:172] (0xc000922000) (3) Data frame sent\nI0327 00:46:08.142635 3159 log.go:172] (0xc000c080b0) Data frame received for 3\nI0327 00:46:08.142654 3159 log.go:172] (0xc000922000) (3) Data frame handling\nI0327 00:46:08.142665 3159 log.go:172] (0xc000922000) (3) Data frame sent\nI0327 00:46:08.142817 3159 log.go:172] (0xc000c080b0) Data frame received for 3\nI0327 00:46:08.142827 3159 log.go:172] (0xc000922000) (3) Data frame handling\nI0327 00:46:08.143162 3159 log.go:172] (0xc000c080b0) Data frame received for 5\nI0327 00:46:08.143178 3159 log.go:172] (0xc00098a000) (5) Data frame handling\nI0327 00:46:08.145338 3159 log.go:172] (0xc000c080b0) Data frame received for 1\nI0327 00:46:08.145442 3159 log.go:172] (0xc0006c9540) (1) Data frame handling\nI0327 00:46:08.145471 3159 log.go:172] (0xc0006c9540) (1) Data frame sent\nI0327 00:46:08.145487 3159 log.go:172] (0xc000c080b0) (0xc0006c9540) Stream removed, broadcasting: 1\nI0327 00:46:08.145506 3159 log.go:172] (0xc000c080b0) Go away received\nI0327 00:46:08.146057 3159 log.go:172] (0xc000c080b0) (0xc0006c9540) Stream removed, broadcasting: 1\nI0327 00:46:08.146089 3159 log.go:172] (0xc000c080b0) (0xc000922000) Stream removed, broadcasting: 3\nI0327 00:46:08.146101 3159 log.go:172] (0xc000c080b0) (0xc00098a000) Stream removed, broadcasting: 5\n" Mar 27 00:46:08.151: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-55.svc.cluster.local\tcanonical name = externalsvc.services-55.svc.cluster.local.\nName:\texternalsvc.services-55.svc.cluster.local\nAddress: 10.96.96.106\n\n" STEP: deleting ReplicationController externalsvc in namespace services-55, will wait for the garbage collector to delete the pods Mar 27 00:46:08.211: INFO: Deleting ReplicationController externalsvc took: 6.628669ms Mar 27 00:46:08.511: INFO: Terminating ReplicationController externalsvc pods took: 300.262584ms Mar 27 00:46:23.037: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:46:23.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-55" for this suite. 
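For reference, the nslookup transcript above is the substantive assertion: once the Service is switched to type=ExternalName, its cluster DNS name resolves as a CNAME to the target service. Assuming any pod with a shell in the namespace (the suite creates a throwaway exec pod, execpodcpbb2 in this run), the check is:

    kubectl exec --namespace=services-55 <exec-pod> -- nslookup clusterip-service
    # expected: clusterip-service.services-55.svc.cluster.local
    #           canonical name = externalsvc.services-55.svc.cluster.local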
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:25.491 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":252,"skipped":4379,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:46:23.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 27 00:46:23.922: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 27 00:46:25.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866783, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866783, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866784, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720866783, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:46:28.960: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 27 00:46:33.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-6742 to-be-attached-pod -i -c=container1' Mar 27 00:46:33.139: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:46:33.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6742" for this suite. STEP: Destroying namespace "webhook-6742-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.154 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":253,"skipped":4388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:46:33.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-edc11d5b-5bba-4667-bdbf-4a6d5d7b7d9a STEP: Creating a pod to test consume configMaps Mar 27 00:46:33.345: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0" in namespace "projected-9342" to be "Succeeded or Failed" Mar 27 00:46:33.360: INFO: Pod "pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.68598ms Mar 27 00:46:35.377: INFO: Pod "pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032619415s Mar 27 00:46:37.381: INFO: Pod "pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036240668s STEP: Saw pod success Mar 27 00:46:37.381: INFO: Pod "pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0" satisfied condition "Succeeded or Failed" Mar 27 00:46:37.383: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0 container projected-configmap-volume-test: STEP: delete the pod Mar 27 00:46:37.402: INFO: Waiting for pod pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0 to disappear Mar 27 00:46:37.407: INFO: Pod pod-projected-configmaps-07963a71-0a66-4345-b6c5-68fccc1dffd0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:46:37.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9342" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4441,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:46:37.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 27 00:46:38.170: INFO: Pod name wrapped-volume-race-a8b062bb-424e-4cdd-92f1-fbcef94f7024: Found 0 pods out of 5 Mar 27 00:46:43.179: INFO: Pod name wrapped-volume-race-a8b062bb-424e-4cdd-92f1-fbcef94f7024: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a8b062bb-424e-4cdd-92f1-fbcef94f7024 in namespace emptydir-wrapper-5375, will wait for the garbage collector to delete the pods Mar 27 00:46:55.268: INFO: Deleting ReplicationController wrapped-volume-race-a8b062bb-424e-4cdd-92f1-fbcef94f7024 took: 6.546337ms Mar 27 00:46:55.668: INFO: Terminating ReplicationController wrapped-volume-race-a8b062bb-424e-4cdd-92f1-fbcef94f7024 pods took: 400.265014ms STEP: Creating RC which spawns configmap-volume pods Mar 27 00:47:04.097: INFO: Pod name wrapped-volume-race-9fc0b3b2-394f-413d-ab57-797adfbbb93e: Found 0 pods out of 5 Mar 27 00:47:09.104: INFO: Pod name wrapped-volume-race-9fc0b3b2-394f-413d-ab57-797adfbbb93e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9fc0b3b2-394f-413d-ab57-797adfbbb93e in namespace emptydir-wrapper-5375, will wait for the garbage collector to delete the pods Mar 27 00:47:23.197: INFO: Deleting ReplicationController wrapped-volume-race-9fc0b3b2-394f-413d-ab57-797adfbbb93e took: 5.590684ms Mar 27 00:47:23.598: INFO: Terminating ReplicationController wrapped-volume-race-9fc0b3b2-394f-413d-ab57-797adfbbb93e pods took: 400.273531ms STEP: Creating RC which spawns 
configmap-volume pods Mar 27 00:47:34.059: INFO: Pod name wrapped-volume-race-b1b6fad8-b782-409e-9c92-131417cd7d2d: Found 0 pods out of 5 Mar 27 00:47:39.066: INFO: Pod name wrapped-volume-race-b1b6fad8-b782-409e-9c92-131417cd7d2d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b1b6fad8-b782-409e-9c92-131417cd7d2d in namespace emptydir-wrapper-5375, will wait for the garbage collector to delete the pods Mar 27 00:47:53.163: INFO: Deleting ReplicationController wrapped-volume-race-b1b6fad8-b782-409e-9c92-131417cd7d2d took: 21.395236ms Mar 27 00:47:53.464: INFO: Terminating ReplicationController wrapped-volume-race-b1b6fad8-b782-409e-9c92-131417cd7d2d pods took: 300.261666ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:48:04.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5375" for this suite. • [SLOW TEST:87.190 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":255,"skipped":4455,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:48:04.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-3680032d-e892-4f4e-ac38-44018804f529 STEP: Creating a pod to test consume configMaps Mar 27 00:48:04.715: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a" in namespace "projected-3333" to be "Succeeded or Failed" Mar 27 00:48:04.743: INFO: Pod "pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.008875ms Mar 27 00:48:06.748: INFO: Pod "pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032692854s Mar 27 00:48:08.752: INFO: Pod "pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036430573s STEP: Saw pod success Mar 27 00:48:08.752: INFO: Pod "pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a" satisfied condition "Succeeded or Failed" Mar 27 00:48:08.755: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a container projected-configmap-volume-test: STEP: delete the pod Mar 27 00:48:08.815: INFO: Waiting for pod pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a to disappear Mar 27 00:48:08.825: INFO: Pod pod-projected-configmaps-00732f52-0d41-44ed-9ab4-61db4ecc516a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:48:08.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3333" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4461,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:48:08.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:48:08.918: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:48:10.994: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Pending, waiting for it to be Running (with Ready = true) Mar 27 00:48:12.935: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:14.922: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:16.922: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:18.923: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:20.922: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:22.922: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:24.922: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:26.922: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:28.922: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:30.922: INFO: The 
status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = false) Mar 27 00:48:32.921: INFO: The status of Pod test-webserver-4741d367-5ebc-4e86-b734-2a5d7f16833a is Running (Ready = true) Mar 27 00:48:32.924: INFO: Container started at 2020-03-27 00:48:11 +0000 UTC, pod became ready at 2020-03-27 00:48:32 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:48:32.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7036" for this suite. • [SLOW TEST:24.102 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4466,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:48:32.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 27 00:48:32.980: INFO: namespace kubectl-7808 Mar 27 00:48:32.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7808' Mar 27 00:48:33.367: INFO: stderr: "" Mar 27 00:48:33.367: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 27 00:48:34.371: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:48:34.371: INFO: Found 0 / 1 Mar 27 00:48:35.372: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:48:35.372: INFO: Found 0 / 1 Mar 27 00:48:36.371: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:48:36.371: INFO: Found 0 / 1 Mar 27 00:48:37.371: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:48:37.371: INFO: Found 1 / 1 Mar 27 00:48:37.371: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 27 00:48:37.374: INFO: Selector matched 1 pods for map[app:agnhost] Mar 27 00:48:37.374: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 27 00:48:37.374: INFO: wait on agnhost-master startup in kubectl-7808 Mar 27 00:48:37.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-fzk48 agnhost-master --namespace=kubectl-7808' Mar 27 00:48:37.485: INFO: stderr: "" Mar 27 00:48:37.485: INFO: stdout: "Paused\n" STEP: exposing RC Mar 27 00:48:37.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7808' Mar 27 00:48:37.648: INFO: stderr: "" Mar 27 00:48:37.648: INFO: stdout: "service/rm2 exposed\n" Mar 27 00:48:37.651: INFO: Service rm2 in namespace kubectl-7808 found. STEP: exposing service Mar 27 00:48:39.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7808' Mar 27 00:48:39.783: INFO: stderr: "" Mar 27 00:48:39.783: INFO: stdout: "service/rm3 exposed\n" Mar 27 00:48:39.792: INFO: Service rm3 in namespace kubectl-7808 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:48:41.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7808" for this suite. • [SLOW TEST:8.875 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":258,"skipped":4478,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:48:41.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-2wwb STEP: Creating a pod to test atomic-volume-subpath Mar 27 00:48:41.912: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2wwb" in namespace "subpath-2683" to be "Succeeded or Failed" Mar 27 00:48:41.918: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390924ms Mar 27 00:48:43.922: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010156343s Mar 27 00:48:45.926: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 4.014035027s Mar 27 00:48:47.930: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 6.017844411s Mar 27 00:48:49.934: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 8.022093521s Mar 27 00:48:51.939: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 10.026851118s Mar 27 00:48:53.943: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 12.031421583s Mar 27 00:48:55.948: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 14.035824734s Mar 27 00:48:57.951: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 16.039655535s Mar 27 00:48:59.955: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 18.043613751s Mar 27 00:49:01.960: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 20.047920395s Mar 27 00:49:03.964: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Running", Reason="", readiness=true. Elapsed: 22.05201167s Mar 27 00:49:05.967: INFO: Pod "pod-subpath-test-configmap-2wwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055654372s STEP: Saw pod success Mar 27 00:49:05.967: INFO: Pod "pod-subpath-test-configmap-2wwb" satisfied condition "Succeeded or Failed" Mar 27 00:49:05.970: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-2wwb container test-container-subpath-configmap-2wwb: STEP: delete the pod Mar 27 00:49:06.007: INFO: Waiting for pod pod-subpath-test-configmap-2wwb to disappear Mar 27 00:49:06.020: INFO: Pod pod-subpath-test-configmap-2wwb no longer exists STEP: Deleting pod pod-subpath-test-configmap-2wwb Mar 27 00:49:06.020: INFO: Deleting pod "pod-subpath-test-configmap-2wwb" in namespace "subpath-2683" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:06.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2683" for this suite. 
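A minimal sketch of a subPath mount like the ones these Atomic writer tests exercise, with illustrative names (the suite's pods instead poll the file repeatedly while the volume is rewritten):

    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-subpath-demo     # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "cat /etc/demo/key.txt"]
        volumeMounts:
        - name: cm
          mountPath: /etc/demo/key.txt
          subPath: key.txt             # mount a single key as a plain file
      volumes:
      - name: cm
        configMap:
          name: demo-config            # illustrative; must exist in the namespace

Worth noting when reusing the pattern: unlike a whole-volume configMap mount, a subPath mount does not receive updates if the ConfigMap is changed later.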
• [SLOW TEST:24.221 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":259,"skipped":4496,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:06.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 27 00:49:06.110: INFO: Waiting up to 5m0s for pod "pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c" in namespace "emptydir-8775" to be "Succeeded or Failed" Mar 27 00:49:06.115: INFO: Pod "pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.289814ms Mar 27 00:49:08.143: INFO: Pod "pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033162524s Mar 27 00:49:10.147: INFO: Pod "pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037608728s STEP: Saw pod success Mar 27 00:49:10.148: INFO: Pod "pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c" satisfied condition "Succeeded or Failed" Mar 27 00:49:10.150: INFO: Trying to get logs from node latest-worker2 pod pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c container test-container: STEP: delete the pod Mar 27 00:49:10.195: INFO: Waiting for pod pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c to disappear Mar 27 00:49:10.221: INFO: Pod pod-f5fc5ba0-84ac-46d5-89ba-b2ee87aa507c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:10.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8775" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4511,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:10.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 27 00:49:13.330: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:13.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3143" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4527,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:13.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:20.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3899" for this suite. 
• [SLOW TEST:7.155 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":262,"skipped":4527,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:20.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:49:20.584: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ecab4ed6-ecb1-462f-8808-9293b8d0ec8a" in namespace "security-context-test-4544" to be "Succeeded or Failed" Mar 27 00:49:20.601: INFO: Pod "busybox-readonly-false-ecab4ed6-ecb1-462f-8808-9293b8d0ec8a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.033797ms Mar 27 00:49:22.605: INFO: Pod "busybox-readonly-false-ecab4ed6-ecb1-462f-8808-9293b8d0ec8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02095592s Mar 27 00:49:24.608: INFO: Pod "busybox-readonly-false-ecab4ed6-ecb1-462f-8808-9293b8d0ec8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024311415s Mar 27 00:49:24.608: INFO: Pod "busybox-readonly-false-ecab4ed6-ecb1-462f-8808-9293b8d0ec8a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:24.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4544" for this suite. 
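The Security Context spec above verifies that readOnlyRootFilesystem: false (the default) leaves the container's root filesystem writable. A minimal pod of that shape, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: rw-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /writable-check && echo rootfs is writable"]
    securityContext:
      readOnlyRootFilesystem: false   # flipping this to true makes the touch fail
EOF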
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4529,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:24.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Mar 27 00:49:24.669: INFO: Waiting up to 5m0s for pod "var-expansion-c21aec9a-e415-47d2-89de-367ccc356394" in namespace "var-expansion-693" to be "Succeeded or Failed" Mar 27 00:49:24.672: INFO: Pod "var-expansion-c21aec9a-e415-47d2-89de-367ccc356394": Phase="Pending", Reason="", readiness=false. Elapsed: 3.147236ms Mar 27 00:49:26.676: INFO: Pod "var-expansion-c21aec9a-e415-47d2-89de-367ccc356394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007034163s Mar 27 00:49:28.680: INFO: Pod "var-expansion-c21aec9a-e415-47d2-89de-367ccc356394": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010878918s STEP: Saw pod success Mar 27 00:49:28.680: INFO: Pod "var-expansion-c21aec9a-e415-47d2-89de-367ccc356394" satisfied condition "Succeeded or Failed" Mar 27 00:49:28.683: INFO: Trying to get logs from node latest-worker2 pod var-expansion-c21aec9a-e415-47d2-89de-367ccc356394 container dapi-container: STEP: delete the pod Mar 27 00:49:28.712: INFO: Waiting for pod var-expansion-c21aec9a-e415-47d2-89de-367ccc356394 to disappear Mar 27 00:49:28.720: INFO: Pod var-expansion-c21aec9a-e415-47d2-89de-367ccc356394 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:28.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-693" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4531,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:28.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 27 00:49:28.813: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:28.832: INFO: Number of nodes with available pods: 0 Mar 27 00:49:28.832: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:49:29.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:29.838: INFO: Number of nodes with available pods: 0 Mar 27 00:49:29.838: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:49:30.835: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:30.838: INFO: Number of nodes with available pods: 0 Mar 27 00:49:30.838: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:49:31.837: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:31.841: INFO: Number of nodes with available pods: 1 Mar 27 00:49:31.841: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:49:32.837: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:32.841: INFO: Number of nodes with available pods: 1 Mar 27 00:49:32.841: INFO: Node latest-worker is running more than one daemon pod Mar 27 00:49:33.838: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:33.841: INFO: Number of nodes with available pods: 2 Mar 27 00:49:33.841: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 27 00:49:33.866: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:33.871: INFO: Number of nodes with available pods: 1 Mar 27 00:49:33.871: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:34.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:34.879: INFO: Number of nodes with available pods: 1 Mar 27 00:49:34.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:35.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:35.879: INFO: Number of nodes with available pods: 1 Mar 27 00:49:35.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:36.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:36.879: INFO: Number of nodes with available pods: 1 Mar 27 00:49:36.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:37.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:37.879: INFO: Number of nodes with available pods: 1 Mar 27 00:49:37.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:38.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:38.880: INFO: Number of nodes with available pods: 1 Mar 27 00:49:38.880: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:39.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:39.879: INFO: Number of nodes with available pods: 1 Mar 27 00:49:39.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:40.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:40.879: INFO: Number of nodes with available pods: 1 Mar 27 00:49:40.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:41.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:41.880: INFO: Number of nodes with available pods: 1 Mar 27 00:49:41.880: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:42.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:42.880: INFO: Number of nodes with available pods: 1 Mar 27 00:49:42.880: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:43.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:43.878: INFO: Number of nodes with available pods: 1 Mar 27 00:49:43.878: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:44.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:44.881: INFO: Number of nodes with available pods: 1 Mar 27 00:49:44.881: INFO: Node latest-worker2 is running more than one daemon pod Mar 27 00:49:45.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 27 00:49:45.879: INFO: Number of nodes with available pods: 2 Mar 27 00:49:45.879: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3390, will wait for the garbage collector to delete the pods Mar 27 00:49:45.940: INFO: Deleting DaemonSet.extensions daemon-set took: 6.307553ms Mar 27 00:49:46.241: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.209303ms Mar 27 00:49:49.557: INFO: Number of nodes with available pods: 0 Mar 27 00:49:49.557: INFO: Number of running nodes: 0, number of available pods: 0 Mar 27 00:49:49.559: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3390/daemonsets","resourceVersion":"3089326"},"items":null} Mar 27 00:49:49.562: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3390/pods","resourceVersion":"3089326"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:49.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3390" for this suite. 
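The DaemonSet spec above creates a trivial daemon, waits for one pod per schedulable node, deletes one pod, then waits for the controller to revive it; the repeated "can't tolerate node latest-control-plane" lines show why the tainted control-plane node is excluded from the count. A minimal DaemonSet of that shape (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
      # To also land on the control-plane node, the pod spec would need a
      # toleration for the node-role.kubernetes.io/master:NoSchedule taint.
EOF
kubectl delete pod -l app=daemon-demo --wait=false   # the controller recreates them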
• [SLOW TEST:20.849 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":265,"skipped":4532,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:49.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1773.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1773.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 27 00:49:55.708: INFO: DNS probes using dns-1773/dns-test-6fa4d785-9ffe-4677-8d91-b4813d6ef17e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:49:55.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1773" for this suite. 
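The DNS spec above runs the dig loops quoted in the log inside "wheezy" and "jessie" probe pods, checking the service A record over UDP and TCP plus the pods' own A records. The essence can be reproduced from any pod; the probe pod name below is hypothetical (busybox:1.28 is used because nslookup is broken in newer busybox builds):

# A record for the API server's ClusterIP service:
kubectl run dns-probe --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
kubectl logs dns-probe
# Pod A records take the <dashed-ip>.<namespace>.pod.cluster.local form,
# which is what the awk pipeline in the quoted probe script constructs.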
• [SLOW TEST:6.192 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":266,"skipped":4540,"failed":0} SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:49:55.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:50:00.177: INFO: Waiting up to 5m0s for pod "client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327" in namespace "pods-8164" to be "Succeeded or Failed" Mar 27 00:50:00.197: INFO: Pod "client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327": Phase="Pending", Reason="", readiness=false. Elapsed: 20.041443ms Mar 27 00:50:02.201: INFO: Pod "client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023946835s Mar 27 00:50:04.205: INFO: Pod "client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02826961s STEP: Saw pod success Mar 27 00:50:04.205: INFO: Pod "client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327" satisfied condition "Succeeded or Failed" Mar 27 00:50:04.208: INFO: Trying to get logs from node latest-worker2 pod client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327 container env3cont: STEP: delete the pod Mar 27 00:50:04.242: INFO: Waiting for pod client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327 to disappear Mar 27 00:50:04.255: INFO: Pod client-envvars-48b48b41-fa4a-4c5e-a1f3-dea0e46ae327 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:04.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8164" for this suite. 
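The Pods spec above checks the Docker-links-style environment variables the kubelet injects for every service that already exists when a pod starts (which is why the test creates the server pod and service first, and only then the client pod). For a hypothetical service named fooservice, a subsequently started pod would see variables like these:

# Hypothetical names; variable names are uppercased with '-' mapped to '_':
kubectl create deployment fooserver --image=k8s.gcr.io/pause:3.2
kubectl expose deployment fooserver --name=fooservice --port=8765 --target-port=8080
kubectl run envcheck --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE'
kubectl logs envcheck
# FOOSERVICE_SERVICE_HOST=10.96.x.x
# FOOSERVICE_SERVICE_PORT=8765
# (plus FOOSERVICE_PORT_* variants)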
• [SLOW TEST:8.493 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4543,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:04.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-5eeaa93e-579e-4ab6-83af-38661fa09b45 STEP: Creating a pod to test consume configMaps Mar 27 00:50:04.344: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c" in namespace "projected-1676" to be "Succeeded or Failed" Mar 27 00:50:04.350: INFO: Pod "pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126616ms Mar 27 00:50:06.354: INFO: Pod "pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00998701s Mar 27 00:50:08.358: INFO: Pod "pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014130323s STEP: Saw pod success Mar 27 00:50:08.358: INFO: Pod "pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c" satisfied condition "Succeeded or Failed" Mar 27 00:50:08.361: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c container projected-configmap-volume-test: STEP: delete the pod Mar 27 00:50:08.434: INFO: Waiting for pod pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c to disappear Mar 27 00:50:08.446: INFO: Pod pod-projected-configmaps-127b3ff6-7708-4bf6-80ce-4c8a429bd74c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:08.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1676" for this suite. 
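The Projected configMap spec above mounts a ConfigMap through a projected volume and reads it from a non-root UID, which mainly exercises the file ownership and modes the atomic writer sets. A sketch with hypothetical names:

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "id && cat /projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: projected-demo
EOF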
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:08.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-fc402e76-239e-461c-bee7-27cd9eec8149 STEP: Creating a pod to test consume secrets Mar 27 00:50:08.520: INFO: Waiting up to 5m0s for pod "pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991" in namespace "secrets-5606" to be "Succeeded or Failed" Mar 27 00:50:08.523: INFO: Pod "pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113211ms Mar 27 00:50:10.560: INFO: Pod "pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039898942s Mar 27 00:50:12.564: INFO: Pod "pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043675249s STEP: Saw pod success Mar 27 00:50:12.564: INFO: Pod "pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991" satisfied condition "Succeeded or Failed" Mar 27 00:50:12.567: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991 container secret-volume-test: STEP: delete the pod Mar 27 00:50:12.598: INFO: Waiting for pod pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991 to disappear Mar 27 00:50:12.614: INFO: Pod pod-secrets-9707321a-26d0-4185-9066-afe5d2d6e991 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:12.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5606" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4585,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:12.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 27 00:50:13.432: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 27 00:50:15.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720867013, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720867013, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720867013, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720867013, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 27 00:50:18.475: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:50:18.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:19.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-500" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.083 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":270,"skipped":4590,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:19.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5383 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5383 I0327 00:50:19.818594 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5383, replica count: 2 I0327 00:50:22.868974 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0327 00:50:25.869421 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 27 00:50:25.869: INFO: Creating new exec pod Mar 27 00:50:30.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5383 execpodvnjpj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 27 00:50:31.113: INFO: stderr: "I0327 00:50:31.029973 3293 log.go:172] (0xc000b20580) (0xc000bb6000) Create stream\nI0327 00:50:31.030030 3293 log.go:172] (0xc000b20580) (0xc000bb6000) Stream added, broadcasting: 1\nI0327 00:50:31.032436 3293 log.go:172] (0xc000b20580) Reply frame received for 1\nI0327 00:50:31.032473 3293 log.go:172] (0xc000b20580) (0xc000bb60a0) Create stream\nI0327 00:50:31.032487 3293 log.go:172] (0xc000b20580) (0xc000bb60a0) Stream added, broadcasting: 3\nI0327 00:50:31.033462 3293 log.go:172] (0xc000b20580) Reply frame received for 3\nI0327 00:50:31.033492 3293 log.go:172] (0xc000b20580) (0xc0006b1360) Create stream\nI0327 00:50:31.033505 3293 log.go:172] (0xc000b20580) (0xc0006b1360) Stream added, broadcasting: 5\nI0327 00:50:31.034248 3293 log.go:172] (0xc000b20580) 
Reply frame received for 5\nI0327 00:50:31.105780 3293 log.go:172] (0xc000b20580) Data frame received for 5\nI0327 00:50:31.105818 3293 log.go:172] (0xc0006b1360) (5) Data frame handling\nI0327 00:50:31.105839 3293 log.go:172] (0xc0006b1360) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0327 00:50:31.106282 3293 log.go:172] (0xc000b20580) Data frame received for 5\nI0327 00:50:31.106306 3293 log.go:172] (0xc0006b1360) (5) Data frame handling\nI0327 00:50:31.106326 3293 log.go:172] (0xc0006b1360) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0327 00:50:31.106581 3293 log.go:172] (0xc000b20580) Data frame received for 3\nI0327 00:50:31.106605 3293 log.go:172] (0xc000bb60a0) (3) Data frame handling\nI0327 00:50:31.106721 3293 log.go:172] (0xc000b20580) Data frame received for 5\nI0327 00:50:31.106742 3293 log.go:172] (0xc0006b1360) (5) Data frame handling\nI0327 00:50:31.108653 3293 log.go:172] (0xc000b20580) Data frame received for 1\nI0327 00:50:31.108685 3293 log.go:172] (0xc000bb6000) (1) Data frame handling\nI0327 00:50:31.108706 3293 log.go:172] (0xc000bb6000) (1) Data frame sent\nI0327 00:50:31.108729 3293 log.go:172] (0xc000b20580) (0xc000bb6000) Stream removed, broadcasting: 1\nI0327 00:50:31.108761 3293 log.go:172] (0xc000b20580) Go away received\nI0327 00:50:31.109330 3293 log.go:172] (0xc000b20580) (0xc000bb6000) Stream removed, broadcasting: 1\nI0327 00:50:31.109357 3293 log.go:172] (0xc000b20580) (0xc000bb60a0) Stream removed, broadcasting: 3\nI0327 00:50:31.109370 3293 log.go:172] (0xc000b20580) (0xc0006b1360) Stream removed, broadcasting: 5\n" Mar 27 00:50:31.114: INFO: stdout: "" Mar 27 00:50:31.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5383 execpodvnjpj -- /bin/sh -x -c nc -zv -t -w 2 10.96.138.35 80' Mar 27 00:50:31.316: INFO: stderr: "I0327 00:50:31.238558 3315 log.go:172] (0xc0000e8370) (0xc000a6a000) Create stream\nI0327 00:50:31.238615 3315 log.go:172] (0xc0000e8370) (0xc000a6a000) Stream added, broadcasting: 1\nI0327 00:50:31.245893 3315 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0327 00:50:31.249743 3315 log.go:172] (0xc0000e8370) (0xc0009e8000) Create stream\nI0327 00:50:31.249767 3315 log.go:172] (0xc0000e8370) (0xc0009e8000) Stream added, broadcasting: 3\nI0327 00:50:31.250653 3315 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0327 00:50:31.250680 3315 log.go:172] (0xc0000e8370) (0xc0006bb180) Create stream\nI0327 00:50:31.250688 3315 log.go:172] (0xc0000e8370) (0xc0006bb180) Stream added, broadcasting: 5\nI0327 00:50:31.251327 3315 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0327 00:50:31.309819 3315 log.go:172] (0xc0000e8370) Data frame received for 5\nI0327 00:50:31.309868 3315 log.go:172] (0xc0006bb180) (5) Data frame handling\nI0327 00:50:31.309894 3315 log.go:172] (0xc0006bb180) (5) Data frame sent\nI0327 00:50:31.309911 3315 log.go:172] (0xc0000e8370) Data frame received for 5\nI0327 00:50:31.309926 3315 log.go:172] (0xc0006bb180) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.138.35 80\nConnection to 10.96.138.35 80 port [tcp/http] succeeded!\nI0327 00:50:31.309975 3315 log.go:172] (0xc0000e8370) Data frame received for 3\nI0327 00:50:31.310026 3315 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0327 00:50:31.311472 3315 log.go:172] (0xc0000e8370) Data frame received for 1\nI0327 00:50:31.311502 3315 log.go:172] (0xc000a6a000) (1) Data frame handling\nI0327 
00:50:31.311529 3315 log.go:172] (0xc000a6a000) (1) Data frame sent\nI0327 00:50:31.311550 3315 log.go:172] (0xc0000e8370) (0xc000a6a000) Stream removed, broadcasting: 1\nI0327 00:50:31.311741 3315 log.go:172] (0xc0000e8370) Go away received\nI0327 00:50:31.311926 3315 log.go:172] (0xc0000e8370) (0xc000a6a000) Stream removed, broadcasting: 1\nI0327 00:50:31.311945 3315 log.go:172] (0xc0000e8370) (0xc0009e8000) Stream removed, broadcasting: 3\nI0327 00:50:31.311956 3315 log.go:172] (0xc0000e8370) (0xc0006bb180) Stream removed, broadcasting: 5\n" Mar 27 00:50:31.316: INFO: stdout: "" Mar 27 00:50:31.316: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:31.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5383" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.644 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":271,"skipped":4595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:31.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Mar 27 00:50:31.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Mar 27 00:50:31.498: INFO: stderr: "" Mar 27 00:50:31.498: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:31.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2753" for this suite. 
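The Services spec above flips a service from type: ExternalName (a DNS CNAME, no ClusterIP, no endpoints) to type: ClusterIP backed by a replication controller, then verifies reachability with the nc probes quoted in the exec-stream logs. One hypothetical way to do the same by hand (a ClusterIP service needs ports and must drop externalName; with matching backend pods present, the nc probe succeeds):

kubectl create service externalname externalname-demo --external-name=example.com
kubectl patch service externalname-demo --type=merge \
  -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
# Same reachability check the test runs from its exec pod:
kubectl run probe --image=busybox --restart=Never -- \
  sh -c 'nc -zv -w 2 externalname-demo 80'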
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":272,"skipped":4646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:31.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:50:31.583: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 27 00:50:33.665: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:34.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8570" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":273,"skipped":4685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:34.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 27 00:50:34.829: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:36.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4552" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":274,"skipped":4709,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 27 00:50:36.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-76594e68-971a-490d-a5a9-ba8edf6d33e7 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 27 00:50:36.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9911" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":275,"skipped":4713,"failed":0} SSSSMar 27 00:50:36.529: INFO: Running AfterSuite actions on all nodes Mar 27 00:50:36.529: INFO: Running AfterSuite actions on node 1 Mar 27 00:50:36.529: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0} Ran 275 of 4992 Specs in 4444.590 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped PASS