I0311 23:35:42.544756 7 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0311 23:35:42.545014 7 e2e.go:109] Starting e2e run "a4c78ec9-80ba-47d6-8b28-fe2c2fcf9aba" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583969741 - Will randomize all specs
Will run 280 of 4845 specs

Mar 11 23:35:42.633: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 23:35:42.635: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 11 23:35:42.649: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 11 23:35:42.672: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 11 23:35:42.672: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 11 23:35:42.672: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 11 23:35:42.678: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 11 23:35:42.678: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 11 23:35:42.678: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Mar 11 23:35:42.679: INFO: kube-apiserver version: v1.17.0
Mar 11 23:35:42.679: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 23:35:42.682: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 11 23:35:42.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
Mar 11 23:35:42.755: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar 11 23:35:43.452: INFO: Pod name wrapped-volume-race-04fd9689-7022-4d33-8123-1968ebdbca62: Found 0 pods out of 5
Mar 11 23:35:48.456: INFO: Pod name wrapped-volume-race-04fd9689-7022-4d33-8123-1968ebdbca62: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-04fd9689-7022-4d33-8123-1968ebdbca62 in namespace emptydir-wrapper-1495, will wait for the garbage collector to delete the pods
Mar 11 23:35:58.609: INFO: Deleting ReplicationController wrapped-volume-race-04fd9689-7022-4d33-8123-1968ebdbca62 took: 4.082357ms
Mar 11 23:35:58.909: INFO: Terminating ReplicationController wrapped-volume-race-04fd9689-7022-4d33-8123-1968ebdbca62 pods took: 300.209294ms
STEP: Creating RC which spawns configmap-volume pods
Mar 11 23:36:12.632: INFO: Pod name wrapped-volume-race-3236c5e9-6bc3-4081-9a17-81dac3e692fd: Found 0 pods out of 5
Mar 11 23:36:17.638: INFO: Pod name wrapped-volume-race-3236c5e9-6bc3-4081-9a17-81dac3e692fd: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3236c5e9-6bc3-4081-9a17-81dac3e692fd in namespace emptydir-wrapper-1495, will wait for the garbage collector to delete the pods
Mar 11 23:36:27.727: INFO: Deleting ReplicationController wrapped-volume-race-3236c5e9-6bc3-4081-9a17-81dac3e692fd took: 19.439626ms
Mar 11 23:36:28.028: INFO: Terminating ReplicationController wrapped-volume-race-3236c5e9-6bc3-4081-9a17-81dac3e692fd pods took: 301.36003ms
STEP: Creating RC which spawns configmap-volume pods
Mar 11 23:36:34.463: INFO: Pod name wrapped-volume-race-08c2c6bb-948c-4048-a569-cf9f14ffff94: Found 0 pods out of 5
Mar 11 23:36:39.469: INFO: Pod name wrapped-volume-race-08c2c6bb-948c-4048-a569-cf9f14ffff94: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-08c2c6bb-948c-4048-a569-cf9f14ffff94 in namespace emptydir-wrapper-1495, will wait for the garbage collector to delete the pods
Mar 11 23:36:49.682: INFO: Deleting ReplicationController wrapped-volume-race-08c2c6bb-948c-4048-a569-cf9f14ffff94 took: 5.311786ms
Mar 11 23:36:49.782: INFO: Terminating ReplicationController wrapped-volume-race-08c2c6bb-948c-4048-a569-cf9f14ffff94 pods took: 100.266943ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 11 23:36:57.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1495" for this suite.
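Annotation (not part of the suite's output): each wrapped-volume-race pod mounts every one of the 50 ConfigMaps as its own volume, which is the many-wrapper-volumes shape that used to race inside the kubelet. A minimal client-go sketch of that pod shape follows; the helper name, image, and mount paths are illustrative, not the framework's own code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrappedVolumePod builds a pod that mounts each named ConfigMap as its own
// volume, so the kubelet has to materialize many wrapper volumes at once.
func wrappedVolumePod(namespace string, configMapNames []string) *corev1.Pod {
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i, cm := range configMapNames {
		volName := fmt.Sprintf("cm-vol-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: volName, MountPath: "/etc/cm/" + volName})
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29", // illustrative image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
}

func main() {
	pod := wrappedVolumePod("emptydir-wrapper-1495", []string{"cm-0", "cm-1", "cm-2"})
	fmt.Println(len(pod.Spec.Volumes), "configmap volumes on", pod.GenerateName)
}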
• [SLOW TEST:74.732 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":1,"skipped":30,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 11 23:36:57.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-02412f26-29ce-46b1-bd51-407bc5eca3b3
STEP: Creating a pod to test consume configMaps
Mar 11 23:36:57.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-9150f8aa-44ee-4f96-b700-178b1f40a7f8" in namespace "configmap-8166" to be "success or failure"
Mar 11 23:36:57.550: INFO: Pod "pod-configmaps-9150f8aa-44ee-4f96-b700-178b1f40a7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.550161ms
Mar 11 23:36:59.553: INFO: Pod "pod-configmaps-9150f8aa-44ee-4f96-b700-178b1f40a7f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.053830022s
STEP: Saw pod success
Mar 11 23:36:59.553: INFO: Pod "pod-configmaps-9150f8aa-44ee-4f96-b700-178b1f40a7f8" satisfied condition "success or failure"
Mar 11 23:36:59.556: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9150f8aa-44ee-4f96-b700-178b1f40a7f8 container configmap-volume-test:
STEP: delete the pod
Mar 11 23:36:59.583: INFO: Waiting for pod pod-configmaps-9150f8aa-44ee-4f96-b700-178b1f40a7f8 to disappear
Mar 11 23:36:59.586: INFO: Pod pod-configmaps-9150f8aa-44ee-4f96-b700-178b1f40a7f8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 11 23:36:59.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8166" for this suite.
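Annotation: the "success or failure" condition the framework logs above is a poll on the pod phase until it reaches Succeeded or Failed. A sketch of an equivalent wait, assuming a recent client-go (where Get takes a context); the pod and namespace names are placeholders, and this is not the framework's own helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSuccessOrFailure polls the pod until it terminates, returning an
// error if it terminates in the Failed phase or the timeout elapses.
func waitForPodSuccessOrFailure(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return true, fmt.Errorf("pod %s failed", name)
		}
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Hypothetical pod name, for illustration only.
	if err := waitForPodSuccessOrFailure(cs, "configmap-8166", "pod-configmaps-example"); err != nil {
		panic(err)
	}
}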
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":2,"skipped":35,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:36:59.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:36:59.751: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 11 23:37:02.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9075 create -f -' Mar 11 23:37:05.391: INFO: stderr: "" Mar 11 23:37:05.391: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 11 23:37:05.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9075 delete e2e-test-crd-publish-openapi-8000-crds test-cr' Mar 11 23:37:05.501: INFO: stderr: "" Mar 11 23:37:05.501: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 11 23:37:05.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9075 apply -f -' Mar 11 23:37:05.771: INFO: stderr: "" Mar 11 23:37:05.771: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 11 23:37:05.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9075 delete e2e-test-crd-publish-openapi-8000-crds test-cr' Mar 11 23:37:05.864: INFO: stderr: "" Mar 11 23:37:05.865: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 11 23:37:05.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8000-crds' Mar 11 23:37:06.102: INFO: stderr: "" Mar 11 23:37:06.102: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8000-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:09.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9075" for this suite. • [SLOW TEST:9.462 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":3,"skipped":45,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:09.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-8bdfbd0a-354b-4906-9f37-ce47c3a421d9 STEP: Creating a pod to test consume secrets Mar 11 23:37:09.192: INFO: Waiting up to 5m0s for pod "pod-secrets-2c42220a-24c7-4ca1-af9f-da5ee9f1a7fb" in namespace "secrets-1492" to be "success or failure" Mar 11 23:37:09.204: INFO: Pod "pod-secrets-2c42220a-24c7-4ca1-af9f-da5ee9f1a7fb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.447261ms Mar 11 23:37:11.208: INFO: Pod "pod-secrets-2c42220a-24c7-4ca1-af9f-da5ee9f1a7fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.015211454s STEP: Saw pod success Mar 11 23:37:11.208: INFO: Pod "pod-secrets-2c42220a-24c7-4ca1-af9f-da5ee9f1a7fb" satisfied condition "success or failure" Mar 11 23:37:11.210: INFO: Trying to get logs from node latest-worker pod pod-secrets-2c42220a-24c7-4ca1-af9f-da5ee9f1a7fb container secret-volume-test: STEP: delete the pod Mar 11 23:37:11.232: INFO: Waiting for pod pod-secrets-2c42220a-24c7-4ca1-af9f-da5ee9f1a7fb to disappear Mar 11 23:37:11.236: INFO: Pod pod-secrets-2c42220a-24c7-4ca1-af9f-da5ee9f1a7fb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:11.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1492" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":4,"skipped":54,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:11.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:37:11.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453" in namespace "projected-9238" to be "success or failure" Mar 11 23:37:11.338: INFO: Pod "downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453": Phase="Pending", Reason="", readiness=false. Elapsed: 15.654425ms Mar 11 23:37:13.342: INFO: Pod "downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019034784s Mar 11 23:37:15.345: INFO: Pod "downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022165619s STEP: Saw pod success Mar 11 23:37:15.345: INFO: Pod "downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453" satisfied condition "success or failure" Mar 11 23:37:15.347: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453 container client-container: STEP: delete the pod Mar 11 23:37:15.371: INFO: Waiting for pod downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453 to disappear Mar 11 23:37:15.374: INFO: Pod downwardapi-volume-52ce56df-0868-4c40-98f8-867ff5047453 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:15.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9238" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":5,"skipped":57,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:15.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:15.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4184" for this suite. 
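Annotation on the 406 expectation (a sketch under assumptions, not the test's code): clients opt into the server-side Table rendering via the Accept header, and a backend that cannot produce Table metadata is expected to answer 406 Not Acceptable. With client-go, assuming a recent release where DoRaw takes a context:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Ask for the Table representation of pods explicitly.
	raw, err := cs.CoreV1().RESTClient().
		Get().
		Resource("pods").
		Namespace("kube-system").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(context.TODO())
	if err != nil {
		panic(err) // a backend without Table support surfaces a 406 here
	}
	fmt.Printf("%d bytes of Table JSON\n", len(raw))
}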
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":6,"skipped":61,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:15.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:37:15.658: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afd7f025-c18f-4f27-805d-75f4be405de1" in namespace "projected-800" to be "success or failure" Mar 11 23:37:15.662: INFO: Pod "downwardapi-volume-afd7f025-c18f-4f27-805d-75f4be405de1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798158ms Mar 11 23:37:17.665: INFO: Pod "downwardapi-volume-afd7f025-c18f-4f27-805d-75f4be405de1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00703874s STEP: Saw pod success Mar 11 23:37:17.665: INFO: Pod "downwardapi-volume-afd7f025-c18f-4f27-805d-75f4be405de1" satisfied condition "success or failure" Mar 11 23:37:17.668: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-afd7f025-c18f-4f27-805d-75f4be405de1 container client-container: STEP: delete the pod Mar 11 23:37:17.721: INFO: Waiting for pod downwardapi-volume-afd7f025-c18f-4f27-805d-75f4be405de1 to disappear Mar 11 23:37:18.030: INFO: Pod downwardapi-volume-afd7f025-c18f-4f27-805d-75f4be405de1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:18.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-800" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":7,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:18.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:37:18.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-244dd8cb-c92e-49a9-b4ba-20bbe0737d52" in namespace "projected-6298" to be "success or failure" Mar 11 23:37:18.162: INFO: Pod "downwardapi-volume-244dd8cb-c92e-49a9-b4ba-20bbe0737d52": Phase="Pending", Reason="", readiness=false. Elapsed: 9.057508ms Mar 11 23:37:20.167: INFO: Pod "downwardapi-volume-244dd8cb-c92e-49a9-b4ba-20bbe0737d52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014619395s STEP: Saw pod success Mar 11 23:37:20.167: INFO: Pod "downwardapi-volume-244dd8cb-c92e-49a9-b4ba-20bbe0737d52" satisfied condition "success or failure" Mar 11 23:37:20.171: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-244dd8cb-c92e-49a9-b4ba-20bbe0737d52 container client-container: STEP: delete the pod Mar 11 23:37:20.225: INFO: Waiting for pod downwardapi-volume-244dd8cb-c92e-49a9-b4ba-20bbe0737d52 to disappear Mar 11 23:37:20.233: INFO: Pod downwardapi-volume-244dd8cb-c92e-49a9-b4ba-20bbe0737d52 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:20.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6298" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":8,"skipped":90,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:20.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 11 23:37:24.317: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7095 PodName:pod-sharedvolume-411ce580-5d77-4ac1-95db-ee97f3846a83 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 23:37:24.317: INFO: >>> kubeConfig: /root/.kube/config I0311 23:37:24.338314 7 log.go:172] (0xc001d36a50) (0xc002896d20) Create stream I0311 23:37:24.338343 7 log.go:172] (0xc001d36a50) (0xc002896d20) Stream added, broadcasting: 1 I0311 23:37:24.340133 7 log.go:172] (0xc001d36a50) Reply frame received for 1 I0311 23:37:24.340158 7 log.go:172] (0xc001d36a50) (0xc0027b0000) Create stream I0311 23:37:24.340166 7 log.go:172] (0xc001d36a50) (0xc0027b0000) Stream added, broadcasting: 3 I0311 23:37:24.340770 7 log.go:172] (0xc001d36a50) Reply frame received for 3 I0311 23:37:24.340793 7 log.go:172] (0xc001d36a50) (0xc00281efa0) Create stream I0311 23:37:24.340801 7 log.go:172] (0xc001d36a50) (0xc00281efa0) Stream added, broadcasting: 5 I0311 23:37:24.341346 7 log.go:172] (0xc001d36a50) Reply frame received for 5 I0311 23:37:24.395442 7 log.go:172] (0xc001d36a50) Data frame received for 3 I0311 23:37:24.395469 7 log.go:172] (0xc0027b0000) (3) Data frame handling I0311 23:37:24.395483 7 log.go:172] (0xc0027b0000) (3) Data frame sent I0311 23:37:24.395569 7 log.go:172] (0xc001d36a50) Data frame received for 3 I0311 23:37:24.395584 7 log.go:172] (0xc0027b0000) (3) Data frame handling I0311 23:37:24.395605 7 log.go:172] (0xc001d36a50) Data frame received for 5 I0311 23:37:24.395614 7 log.go:172] (0xc00281efa0) (5) Data frame handling I0311 23:37:24.396702 7 log.go:172] (0xc001d36a50) Data frame received for 1 I0311 23:37:24.396724 7 log.go:172] (0xc002896d20) (1) Data frame handling I0311 23:37:24.396735 7 log.go:172] (0xc002896d20) (1) Data frame sent I0311 23:37:24.396746 7 log.go:172] (0xc001d36a50) (0xc002896d20) Stream removed, broadcasting: 1 I0311 23:37:24.396763 7 log.go:172] (0xc001d36a50) Go away received I0311 23:37:24.397176 7 log.go:172] (0xc001d36a50) (0xc002896d20) Stream removed, broadcasting: 1 I0311 23:37:24.397204 7 log.go:172] (0xc001d36a50) (0xc0027b0000) Stream removed, broadcasting: 3 I0311 23:37:24.397214 7 log.go:172] (0xc001d36a50) (0xc00281efa0) Stream removed, broadcasting: 5 Mar 11 23:37:24.397: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:24.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7095" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":9,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:24.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-c2a7791c-5793-45cc-a3f8-098fc54ebb9c STEP: Creating a pod to test consume configMaps Mar 11 23:37:24.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3dd6f95e-81f9-4c26-878b-606d569fab1b" in namespace "projected-8297" to be "success or failure" Mar 11 23:37:24.547: INFO: Pod "pod-projected-configmaps-3dd6f95e-81f9-4c26-878b-606d569fab1b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.895881ms Mar 11 23:37:26.550: INFO: Pod "pod-projected-configmaps-3dd6f95e-81f9-4c26-878b-606d569fab1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027414669s STEP: Saw pod success Mar 11 23:37:26.550: INFO: Pod "pod-projected-configmaps-3dd6f95e-81f9-4c26-878b-606d569fab1b" satisfied condition "success or failure" Mar 11 23:37:26.553: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-3dd6f95e-81f9-4c26-878b-606d569fab1b container projected-configmap-volume-test: STEP: delete the pod Mar 11 23:37:26.585: INFO: Waiting for pod pod-projected-configmaps-3dd6f95e-81f9-4c26-878b-606d569fab1b to disappear Mar 11 23:37:26.590: INFO: Pod pod-projected-configmaps-3dd6f95e-81f9-4c26-878b-606d569fab1b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:26.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8297" for this suite. 
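Annotation on the shared-volume pod exec'd above: two containers in one pod mount the same emptyDir, so a file written by one is readable by the other. A sketch of that arrangement with illustrative images and commands (the file path mirrors the one read in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	share := corev1.Volume{
		Name:         "shared-data",
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}
	mount := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-sharedvolume-"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{share},
			Containers: []corev1.Container{
				{
					// Writer drops a file into the shared emptyDir.
					Name:         "writer",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"/bin/sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
				{
					// Reader sees the same file through its own mount of the volume.
					Name:         "reader",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"/bin/sh", "-c", "sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
			},
		},
	}
	fmt.Println(pod.GenerateName, len(pod.Spec.Containers), "containers share", share.Name)
}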
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":10,"skipped":142,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:26.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 11 23:37:26.682: INFO: Waiting up to 5m0s for pod "pod-93fac8da-5b8a-4e60-899a-80756a789fd6" in namespace "emptydir-8496" to be "success or failure" Mar 11 23:37:26.709: INFO: Pod "pod-93fac8da-5b8a-4e60-899a-80756a789fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.446701ms Mar 11 23:37:28.712: INFO: Pod "pod-93fac8da-5b8a-4e60-899a-80756a789fd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.030625954s STEP: Saw pod success Mar 11 23:37:28.712: INFO: Pod "pod-93fac8da-5b8a-4e60-899a-80756a789fd6" satisfied condition "success or failure" Mar 11 23:37:28.715: INFO: Trying to get logs from node latest-worker2 pod pod-93fac8da-5b8a-4e60-899a-80756a789fd6 container test-container: STEP: delete the pod Mar 11 23:37:28.733: INFO: Waiting for pod pod-93fac8da-5b8a-4e60-899a-80756a789fd6 to disappear Mar 11 23:37:28.737: INFO: Pod pod-93fac8da-5b8a-4e60-899a-80756a789fd6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:28.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8496" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":11,"skipped":145,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:28.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:30.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3250" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":12,"skipped":156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:30.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6754 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6754 I0311 23:37:31.134225 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6754, replica count: 2 I0311 23:37:34.184611 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 23:37:34.184: INFO: Creating new exec pod Mar 11 23:37:37.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6754 execpodbkxdj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 11 23:37:37.423: INFO: stderr: "I0311 23:37:37.357557 146 log.go:172] (0xc0000e0b00) (0xc000655ea0) Create stream\nI0311 23:37:37.357595 146 log.go:172] (0xc0000e0b00) (0xc000655ea0) Stream added, broadcasting: 1\nI0311 23:37:37.359426 146 log.go:172] (0xc0000e0b00) Reply frame received for 1\nI0311 23:37:37.359458 146 log.go:172] (0xc0000e0b00) (0xc00062a780) Create stream\nI0311 23:37:37.359469 146 log.go:172] (0xc0000e0b00) (0xc00062a780) Stream added, broadcasting: 3\nI0311 23:37:37.360023 146 log.go:172] (0xc0000e0b00) Reply frame received for 3\nI0311 23:37:37.360041 146 log.go:172] (0xc0000e0b00) (0xc000bda000) Create stream\nI0311 23:37:37.360047 146 log.go:172] (0xc0000e0b00) (0xc000bda000) Stream added, broadcasting: 5\nI0311 23:37:37.360636 146 log.go:172] (0xc0000e0b00) Reply frame received for 5\nI0311 23:37:37.416543 146 log.go:172] (0xc0000e0b00) Data frame received for 5\nI0311 23:37:37.416562 146 log.go:172] (0xc000bda000) (5) Data frame handling\nI0311 23:37:37.416594 146 log.go:172] (0xc000bda000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0311 23:37:37.417414 146 log.go:172] (0xc0000e0b00) Data frame received for 5\nI0311 23:37:37.417440 146 log.go:172] (0xc000bda000) (5) Data frame handling\nI0311 23:37:37.417457 146 log.go:172] (0xc000bda000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0311 23:37:37.417870 146 log.go:172] (0xc0000e0b00) Data frame received for 3\nI0311 23:37:37.417913 146 log.go:172] (0xc00062a780) (3) Data frame handling\nI0311 23:37:37.417937 146 log.go:172] (0xc0000e0b00) Data frame received 
for 5\nI0311 23:37:37.417946 146 log.go:172] (0xc000bda000) (5) Data frame handling\nI0311 23:37:37.419569 146 log.go:172] (0xc0000e0b00) Data frame received for 1\nI0311 23:37:37.419584 146 log.go:172] (0xc000655ea0) (1) Data frame handling\nI0311 23:37:37.419593 146 log.go:172] (0xc000655ea0) (1) Data frame sent\nI0311 23:37:37.419601 146 log.go:172] (0xc0000e0b00) (0xc000655ea0) Stream removed, broadcasting: 1\nI0311 23:37:37.419608 146 log.go:172] (0xc0000e0b00) Go away received\nI0311 23:37:37.419938 146 log.go:172] (0xc0000e0b00) (0xc000655ea0) Stream removed, broadcasting: 1\nI0311 23:37:37.419955 146 log.go:172] (0xc0000e0b00) (0xc00062a780) Stream removed, broadcasting: 3\nI0311 23:37:37.419961 146 log.go:172] (0xc0000e0b00) (0xc000bda000) Stream removed, broadcasting: 5\n" Mar 11 23:37:37.423: INFO: stdout: "" Mar 11 23:37:37.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6754 execpodbkxdj -- /bin/sh -x -c nc -zv -t -w 2 10.96.152.167 80' Mar 11 23:37:37.611: INFO: stderr: "I0311 23:37:37.551335 166 log.go:172] (0xc000976630) (0xc000a34000) Create stream\nI0311 23:37:37.551381 166 log.go:172] (0xc000976630) (0xc000a34000) Stream added, broadcasting: 1\nI0311 23:37:37.553217 166 log.go:172] (0xc000976630) Reply frame received for 1\nI0311 23:37:37.553241 166 log.go:172] (0xc000976630) (0xc0006a9a40) Create stream\nI0311 23:37:37.553249 166 log.go:172] (0xc000976630) (0xc0006a9a40) Stream added, broadcasting: 3\nI0311 23:37:37.553874 166 log.go:172] (0xc000976630) Reply frame received for 3\nI0311 23:37:37.553899 166 log.go:172] (0xc000976630) (0xc0002ec000) Create stream\nI0311 23:37:37.553907 166 log.go:172] (0xc000976630) (0xc0002ec000) Stream added, broadcasting: 5\nI0311 23:37:37.554703 166 log.go:172] (0xc000976630) Reply frame received for 5\nI0311 23:37:37.607207 166 log.go:172] (0xc000976630) Data frame received for 3\nI0311 23:37:37.607226 166 log.go:172] (0xc0006a9a40) (3) Data frame handling\nI0311 23:37:37.607257 166 log.go:172] (0xc000976630) Data frame received for 5\nI0311 23:37:37.607278 166 log.go:172] (0xc0002ec000) (5) Data frame handling\nI0311 23:37:37.607293 166 log.go:172] (0xc0002ec000) (5) Data frame sent\nI0311 23:37:37.607298 166 log.go:172] (0xc000976630) Data frame received for 5\nI0311 23:37:37.607302 166 log.go:172] (0xc0002ec000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.152.167 80\nConnection to 10.96.152.167 80 port [tcp/http] succeeded!\nI0311 23:37:37.608482 166 log.go:172] (0xc000976630) Data frame received for 1\nI0311 23:37:37.608493 166 log.go:172] (0xc000a34000) (1) Data frame handling\nI0311 23:37:37.608498 166 log.go:172] (0xc000a34000) (1) Data frame sent\nI0311 23:37:37.608506 166 log.go:172] (0xc000976630) (0xc000a34000) Stream removed, broadcasting: 1\nI0311 23:37:37.608512 166 log.go:172] (0xc000976630) Go away received\nI0311 23:37:37.608801 166 log.go:172] (0xc000976630) (0xc000a34000) Stream removed, broadcasting: 1\nI0311 23:37:37.608814 166 log.go:172] (0xc000976630) (0xc0006a9a40) Stream removed, broadcasting: 3\nI0311 23:37:37.608820 166 log.go:172] (0xc000976630) (0xc0002ec000) Stream removed, broadcasting: 5\n" Mar 11 23:37:37.611: INFO: stdout: "" Mar 11 23:37:37.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6754 execpodbkxdj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 31331' Mar 11 23:37:37.772: INFO: stderr: "I0311 23:37:37.703637 186 
log.go:172] (0xc00003a630) (0xc00071d9a0) Create stream\nI0311 23:37:37.703673 186 log.go:172] (0xc00003a630) (0xc00071d9a0) Stream added, broadcasting: 1\nI0311 23:37:37.705676 186 log.go:172] (0xc00003a630) Reply frame received for 1\nI0311 23:37:37.705706 186 log.go:172] (0xc00003a630) (0xc000ae0000) Create stream\nI0311 23:37:37.705717 186 log.go:172] (0xc00003a630) (0xc000ae0000) Stream added, broadcasting: 3\nI0311 23:37:37.706320 186 log.go:172] (0xc00003a630) Reply frame received for 3\nI0311 23:37:37.706343 186 log.go:172] (0xc00003a630) (0xc000240000) Create stream\nI0311 23:37:37.706349 186 log.go:172] (0xc00003a630) (0xc000240000) Stream added, broadcasting: 5\nI0311 23:37:37.706929 186 log.go:172] (0xc00003a630) Reply frame received for 5\nI0311 23:37:37.767916 186 log.go:172] (0xc00003a630) Data frame received for 3\nI0311 23:37:37.767952 186 log.go:172] (0xc000ae0000) (3) Data frame handling\nI0311 23:37:37.768043 186 log.go:172] (0xc00003a630) Data frame received for 5\nI0311 23:37:37.768064 186 log.go:172] (0xc000240000) (5) Data frame handling\nI0311 23:37:37.768078 186 log.go:172] (0xc000240000) (5) Data frame sent\nI0311 23:37:37.768085 186 log.go:172] (0xc00003a630) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.16 31331\nConnection to 172.17.0.16 31331 port [tcp/31331] succeeded!\nI0311 23:37:37.768089 186 log.go:172] (0xc000240000) (5) Data frame handling\nI0311 23:37:37.769112 186 log.go:172] (0xc00003a630) Data frame received for 1\nI0311 23:37:37.769123 186 log.go:172] (0xc00071d9a0) (1) Data frame handling\nI0311 23:37:37.769134 186 log.go:172] (0xc00071d9a0) (1) Data frame sent\nI0311 23:37:37.769141 186 log.go:172] (0xc00003a630) (0xc00071d9a0) Stream removed, broadcasting: 1\nI0311 23:37:37.769153 186 log.go:172] (0xc00003a630) Go away received\nI0311 23:37:37.769524 186 log.go:172] (0xc00003a630) (0xc00071d9a0) Stream removed, broadcasting: 1\nI0311 23:37:37.769542 186 log.go:172] (0xc00003a630) (0xc000ae0000) Stream removed, broadcasting: 3\nI0311 23:37:37.769550 186 log.go:172] (0xc00003a630) (0xc000240000) Stream removed, broadcasting: 5\n" Mar 11 23:37:37.772: INFO: stdout: "" Mar 11 23:37:37.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6754 execpodbkxdj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31331' Mar 11 23:37:37.943: INFO: stderr: "I0311 23:37:37.875771 209 log.go:172] (0xc00003a0b0) (0xc000163540) Create stream\nI0311 23:37:37.875808 209 log.go:172] (0xc00003a0b0) (0xc000163540) Stream added, broadcasting: 1\nI0311 23:37:37.877523 209 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0311 23:37:37.877563 209 log.go:172] (0xc00003a0b0) (0xc000682140) Create stream\nI0311 23:37:37.877573 209 log.go:172] (0xc00003a0b0) (0xc000682140) Stream added, broadcasting: 3\nI0311 23:37:37.878397 209 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0311 23:37:37.878426 209 log.go:172] (0xc00003a0b0) (0xc000163680) Create stream\nI0311 23:37:37.878437 209 log.go:172] (0xc00003a0b0) (0xc000163680) Stream added, broadcasting: 5\nI0311 23:37:37.879089 209 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0311 23:37:37.939621 209 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0311 23:37:37.939652 209 log.go:172] (0xc000682140) (3) Data frame handling\nI0311 23:37:37.939672 209 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0311 23:37:37.939678 209 log.go:172] (0xc000163680) (5) Data frame handling\nI0311 23:37:37.939686 209 
log.go:172] (0xc000163680) (5) Data frame sent\nI0311 23:37:37.939696 209 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0311 23:37:37.939702 209 log.go:172] (0xc000163680) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31331\nConnection to 172.17.0.18 31331 port [tcp/31331] succeeded!\nI0311 23:37:37.940909 209 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0311 23:37:37.940931 209 log.go:172] (0xc000163540) (1) Data frame handling\nI0311 23:37:37.940942 209 log.go:172] (0xc000163540) (1) Data frame sent\nI0311 23:37:37.940998 209 log.go:172] (0xc00003a0b0) (0xc000163540) Stream removed, broadcasting: 1\nI0311 23:37:37.941020 209 log.go:172] (0xc00003a0b0) Go away received\nI0311 23:37:37.941296 209 log.go:172] (0xc00003a0b0) (0xc000163540) Stream removed, broadcasting: 1\nI0311 23:37:37.941308 209 log.go:172] (0xc00003a0b0) (0xc000682140) Stream removed, broadcasting: 3\nI0311 23:37:37.941313 209 log.go:172] (0xc00003a0b0) (0xc000163680) Stream removed, broadcasting: 5\n" Mar 11 23:37:37.943: INFO: stdout: "" Mar 11 23:37:37.943: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:38.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6754" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:7.084 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":13,"skipped":186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:38.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:37:38.742: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 23:37:41.787: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 
23:37:41.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-93-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:37:42.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7132" for this suite. STEP: Destroying namespace "webhook-7132-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.097 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":14,"skipped":232,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:37:43.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:37:43.360: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 11 23:37:48.368: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 11 23:37:48.369: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 11 23:37:50.372: INFO: Creating deployment "test-rollover-deployment" Mar 11 23:37:50.408: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 11 23:37:52.412: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 11 23:37:52.417: INFO: Ensure that both replica sets have 1 created replica Mar 11 23:37:52.422: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 11 23:37:52.427: INFO: Updating deployment test-rollover-deployment Mar 11 23:37:52.427: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 11 23:37:54.433: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 11 23:37:54.437: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 11 23:37:54.442: INFO: all replica sets need to contain the pod-template-hash label Mar 11 23:37:54.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566674, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 23:37:56.448: INFO: all replica sets need to contain the pod-template-hash label Mar 11 23:37:56.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566674, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 23:37:58.461: INFO: all replica sets need to contain the pod-template-hash label Mar 11 23:37:58.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566674, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 23:38:00.452: INFO: all replica sets need to contain the pod-template-hash label Mar 11 23:38:00.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566674, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 23:38:02.449: INFO: all replica sets need to contain the pod-template-hash label Mar 11 23:38:02.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566674, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 23:38:04.471: INFO: Mar 11 23:38:04.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566684, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566670, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 23:38:06.448: INFO: Mar 11 23:38:06.448: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 11 23:38:06.456: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5617 /apis/apps/v1/namespaces/deployment-5617/deployments/test-rollover-deployment 446edf21-0d20-4267-b12e-3b6e09a58d3b 926429 2 2020-03-11 23:37:50 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cd5548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-11 23:37:50 +0000 UTC,LastTransitionTime:2020-03-11 23:37:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-11 23:38:04 +0000 UTC,LastTransitionTime:2020-03-11 23:37:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 11 23:38:06.459: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5617 /apis/apps/v1/namespaces/deployment-5617/replicasets/test-rollover-deployment-574d6dfbff c5309c0a-0f29-4af6-8144-486f82f1c6ac 926418 2 2020-03-11 23:37:52 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 446edf21-0d20-4267-b12e-3b6e09a58d3b 0xc002c8d187 0xc002c8d188}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c8d1f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 11 23:38:06.459: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 11 23:38:06.459: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5617 /apis/apps/v1/namespaces/deployment-5617/replicasets/test-rollover-controller c957551d-4328-4ff6-bb45-157cfb12cd0f 926427 2 2020-03-11 23:37:43 +0000 UTC map[name:rollover-pod pod:httpd] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 446edf21-0d20-4267-b12e-3b6e09a58d3b 0xc002c8d067 0xc002c8d068}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c8d0c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 11 23:38:06.459: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5617 /apis/apps/v1/namespaces/deployment-5617/replicasets/test-rollover-deployment-f6c94f66c d6011ba7-1c9f-434c-9ba2-057fad070284 926378 2 2020-03-11 23:37:50 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 446edf21-0d20-4267-b12e-3b6e09a58d3b 0xc002c8d270 0xc002c8d271}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c8d2e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 11 23:38:06.462: INFO: Pod "test-rollover-deployment-574d6dfbff-9j6m8" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-9j6m8 test-rollover-deployment-574d6dfbff- deployment-5617 /api/v1/namespaces/deployment-5617/pods/test-rollover-deployment-574d6dfbff-9j6m8 50be8933-298d-4e5c-91e5-88c9c1b7c726 926386 0 2020-03-11 23:37:52 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff c5309c0a-0f29-4af6-8144-486f82f1c6ac 0xc002bb7f17 0xc002bb7f18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fpcsb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fpcsb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fpcsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:37:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:37:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:37:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.196,StartTime:2020-03-11 23:37:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-11 23:37:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8828842f7e05ddd380f346a24d9e977e99d788d7e8e129571b699fb55cc90570,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:06.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5617" for this suite. • [SLOW TEST:23.341 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":15,"skipped":235,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:06.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:38:06.561: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8" in namespace "downward-api-2064" to be "success or failure" Mar 11 23:38:06.582: INFO: Pod "downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.945279ms Mar 11 23:38:08.587: INFO: Pod "downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025372059s Mar 11 23:38:10.590: INFO: Pod "downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028742013s STEP: Saw pod success Mar 11 23:38:10.590: INFO: Pod "downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8" satisfied condition "success or failure" Mar 11 23:38:10.593: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8 container client-container: STEP: delete the pod Mar 11 23:38:10.644: INFO: Waiting for pod downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8 to disappear Mar 11 23:38:10.655: INFO: Pod downwardapi-volume-b275863c-41bb-4c5a-8566-158560c1f8c8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:10.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2064" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":16,"skipped":239,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:10.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 STEP: creating a pod Mar 11 23:38:10.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8977 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 11 23:38:10.814: INFO: stderr: "" Mar 11 23:38:10.814: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Mar 11 23:38:10.814: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 11 23:38:10.814: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8977" to be "running and ready, or succeeded" Mar 11 23:38:10.840: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 25.303562ms Mar 11 23:38:12.843: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.028991953s Mar 11 23:38:12.843: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 11 23:38:12.843: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Mar 11 23:38:12.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8977' Mar 11 23:38:12.967: INFO: stderr: "" Mar 11 23:38:12.967: INFO: stdout: "I0311 23:38:12.038739 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/9v6 202\nI0311 23:38:12.238847 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/s72 593\nI0311 23:38:12.438880 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/qz5 259\nI0311 23:38:12.638965 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/lc9c 282\nI0311 23:38:12.838898 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/8dj 251\n" STEP: limiting log lines Mar 11 23:38:12.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8977 --tail=1' Mar 11 23:38:13.072: INFO: stderr: "" Mar 11 23:38:13.072: INFO: stdout: "I0311 23:38:13.039085 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/htg 331\n" Mar 11 23:38:13.072: INFO: got output "I0311 23:38:13.039085 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/htg 331\n" STEP: limiting log bytes Mar 11 23:38:13.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8977 --limit-bytes=1' Mar 11 23:38:13.152: INFO: stderr: "" Mar 11 23:38:13.152: INFO: stdout: "I" Mar 11 23:38:13.152: INFO: got output "I" STEP: exposing timestamps Mar 11 23:38:13.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8977 --tail=1 --timestamps' Mar 11 23:38:13.231: INFO: stderr: "" Mar 11 23:38:13.231: INFO: stdout: "2020-03-11T23:38:13.039335279Z I0311 23:38:13.039085 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/htg 331\n" Mar 11 23:38:13.231: INFO: got output "2020-03-11T23:38:13.039335279Z I0311 23:38:13.039085 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/htg 331\n" STEP: restricting to a time range Mar 11 23:38:15.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8977 --since=1s' Mar 11 23:38:15.840: INFO: stderr: "" Mar 11 23:38:15.840: INFO: stdout: "I0311 23:38:14.838964 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/v796 461\nI0311 23:38:15.038887 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/ndht 461\nI0311 23:38:15.238951 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/k259 492\nI0311 23:38:15.438915 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/cp9 576\nI0311 23:38:15.638946 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/n6b 226\n" Mar 11 23:38:15.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8977 --since=24h' Mar 11 23:38:15.926: INFO: stderr: "" Mar 11 23:38:15.926: INFO: stdout: "I0311 23:38:12.038739 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/9v6 202\nI0311 23:38:12.238847 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/s72 593\nI0311 23:38:12.438880 1 logs_generator.go:76] 2 
POST /api/v1/namespaces/kube-system/pods/qz5 259\nI0311 23:38:12.638965 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/lc9c 282\nI0311 23:38:12.838898 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/8dj 251\nI0311 23:38:13.039085 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/htg 331\nI0311 23:38:13.238952 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/vck 484\nI0311 23:38:13.438937 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/fhpz 541\nI0311 23:38:13.638938 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/tjrh 351\nI0311 23:38:13.838913 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/dh2f 211\nI0311 23:38:14.038952 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/vjq4 285\nI0311 23:38:14.238950 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/tt8c 581\nI0311 23:38:14.438920 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/mzmg 247\nI0311 23:38:14.638883 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/d878 526\nI0311 23:38:14.838964 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/v796 461\nI0311 23:38:15.038887 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/ndht 461\nI0311 23:38:15.238951 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/k259 492\nI0311 23:38:15.438915 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/cp9 576\nI0311 23:38:15.638946 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/n6b 226\nI0311 23:38:15.838911 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/9cxt 577\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472 Mar 11 23:38:15.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8977' Mar 11 23:38:22.489: INFO: stderr: "" Mar 11 23:38:22.489: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:22.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8977" for this suite. 
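For reference, the log-filtering behaviour exercised in this spec maps directly onto plain kubectl invocations. A minimal sketch, assuming the logs-generator pod created above is still running in the current namespace (pod name and namespace are illustrative here):

$ kubectl logs logs-generator                        # full log stream for the pod
$ kubectl logs logs-generator --tail=1               # only the most recent line
$ kubectl logs logs-generator --limit-bytes=1        # truncate the stream after one byte
$ kubectl logs logs-generator --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
$ kubectl logs logs-generator --since=1s             # only lines emitted within the last second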
• [SLOW TEST:11.834 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":17,"skipped":251,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:22.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:38:22.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758" in namespace "projected-8207" to be "success or failure" Mar 11 23:38:22.557: INFO: Pod "downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758": Phase="Pending", Reason="", readiness=false. Elapsed: 4.837365ms Mar 11 23:38:24.560: INFO: Pod "downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008114384s Mar 11 23:38:26.563: INFO: Pod "downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011618246s STEP: Saw pod success Mar 11 23:38:26.563: INFO: Pod "downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758" satisfied condition "success or failure" Mar 11 23:38:26.566: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758 container client-container: STEP: delete the pod Mar 11 23:38:26.619: INFO: Waiting for pod downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758 to disappear Mar 11 23:38:26.621: INFO: Pod downwardapi-volume-6b97649b-1579-4794-8c40-a84795c70758 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:26.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8207" for this suite. 
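The downward API projection exercised above can be reproduced by hand. A minimal sketch, assuming kubectl access to a cluster (all names are illustrative): the container's CPU request is projected into a file through a projected volume, and the pod prints it and exits.

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                   # projected below as "250" (divisor 1m)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
$ kubectl logs downwardapi-cpu-demo   # expected output: 250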
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":251,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:26.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:38:26.677: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:27.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-846" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":280,"completed":19,"skipped":262,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:27.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:38:27.947: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:30.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9668" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":20,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:30.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 11 23:38:30.130: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 23:38:30.163: INFO: Number of nodes with available pods: 0 Mar 11 23:38:30.163: INFO: Node latest-worker is running more than one daemon pod Mar 11 23:38:31.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 23:38:31.170: INFO: Number of nodes with available pods: 0 Mar 11 23:38:31.170: INFO: Node latest-worker is running more than one daemon pod Mar 11 23:38:32.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 23:38:32.170: INFO: Number of nodes with available pods: 2 Mar 11 23:38:32.170: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 11 23:38:32.187: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 23:38:32.192: INFO: Number of nodes with available pods: 2 Mar 11 23:38:32.192: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2429, will wait for the garbage collector to delete the pods Mar 11 23:38:33.277: INFO: Deleting DaemonSet.extensions daemon-set took: 3.453498ms Mar 11 23:38:33.577: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.247614ms Mar 11 23:38:42.581: INFO: Number of nodes with available pods: 0 Mar 11 23:38:42.581: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 23:38:42.587: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2429/daemonsets","resourceVersion":"926754"},"items":null} Mar 11 23:38:42.589: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2429/pods","resourceVersion":"926754"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:42.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2429" for this suite. • [SLOW TEST:12.552 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":21,"skipped":288,"failed":0} [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:42.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 11 23:38:42.677: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3097 /api/v1/namespaces/watch-3097/configmaps/e2e-watch-test-watch-closed 4028fa92-efe5-4312-862a-76bb13e3b796 926760 0 2020-03-11 23:38:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 11 23:38:42.677: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3097 /api/v1/namespaces/watch-3097/configmaps/e2e-watch-test-watch-closed 4028fa92-efe5-4312-862a-76bb13e3b796 926761 0 2020-03-11 23:38:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 11 23:38:42.705: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3097 /api/v1/namespaces/watch-3097/configmaps/e2e-watch-test-watch-closed 4028fa92-efe5-4312-862a-76bb13e3b796 926762 0 2020-03-11 23:38:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 11 23:38:42.705: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3097 /api/v1/namespaces/watch-3097/configmaps/e2e-watch-test-watch-closed 4028fa92-efe5-4312-862a-76bb13e3b796 926763 0 2020-03-11 23:38:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:42.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3097" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":22,"skipped":288,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:42.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:38:43.592: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 23:38:45.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566723, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566723, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 23:38:48.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:38:48.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:38:49.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4427" for this suite. STEP: Destroying namespace "webhook-4427-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.183 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":23,"skipped":298,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:38:49.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0311 23:39:00.122582 7 metrics_grabber.go:79] 
Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 23:39:00.122: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:00.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2592" for this suite. • [SLOW TEST:10.234 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":24,"skipped":314,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:00.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 11 23:39:00.219: INFO: Waiting up to 5m0s for pod "downward-api-a474030e-8267-4621-a76c-ca2623234acb" in namespace "downward-api-8985" to be "success or failure" Mar 11 23:39:00.227: INFO: Pod "downward-api-a474030e-8267-4621-a76c-ca2623234acb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441533ms Mar 11 23:39:02.231: INFO: Pod "downward-api-a474030e-8267-4621-a76c-ca2623234acb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012496527s STEP: Saw pod success Mar 11 23:39:02.231: INFO: Pod "downward-api-a474030e-8267-4621-a76c-ca2623234acb" satisfied condition "success or failure" Mar 11 23:39:02.234: INFO: Trying to get logs from node latest-worker pod downward-api-a474030e-8267-4621-a76c-ca2623234acb container dapi-container: STEP: delete the pod Mar 11 23:39:02.259: INFO: Waiting for pod downward-api-a474030e-8267-4621-a76c-ca2623234acb to disappear Mar 11 23:39:02.263: INFO: Pod downward-api-a474030e-8267-4621-a76c-ca2623234acb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:02.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8985" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":25,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:02.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:04.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5067" for this suite. 
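The read-only root filesystem behaviour verified above is controlled by the container-level securityContext. A minimal sketch, assuming kubectl access (names are illustrative); the write is expected to fail:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox
    command: ["sh", "-c", "touch /should-fail || echo root filesystem is read-only"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
$ kubectl logs readonly-root-demo   # expected: root filesystem is read-only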
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":26,"skipped":352,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:04.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:39:04.462: INFO: Creating deployment "test-recreate-deployment" Mar 11 23:39:04.489: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 11 23:39:04.503: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 11 23:39:06.508: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 11 23:39:06.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566744, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566744, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566744, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566744, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 23:39:08.514: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 11 23:39:08.541: INFO: Updating deployment test-recreate-deployment Mar 11 23:39:08.541: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 11 23:39:08.749: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4989 /apis/apps/v1/namespaces/deployment-4989/deployments/test-recreate-deployment 2aad898d-29b7-4b7c-a5ff-0c2475eec4f5 927245 2 2020-03-11 23:39:04 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cc2748 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-11 23:39:08 +0000 UTC,LastTransitionTime:2020-03-11 23:39:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-11 23:39:08 +0000 UTC,LastTransitionTime:2020-03-11 23:39:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 11 23:39:08.755: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4989 /apis/apps/v1/namespaces/deployment-4989/replicasets/test-recreate-deployment-5f94c574ff 93b05037-764c-4b37-9f74-c18cd5e98096 927244 1 2020-03-11 23:39:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 2aad898d-29b7-4b7c-a5ff-0c2475eec4f5 0xc002d9c947 0xc002d9c948}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d9c9a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 11 23:39:08.755: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 11 23:39:08.755: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-4989 /apis/apps/v1/namespaces/deployment-4989/replicasets/test-recreate-deployment-799c574856 1883e3cb-5e8b-4a1e-b1ff-fd94b43c88e8 927233 2 2020-03-11 23:39:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 2aad898d-29b7-4b7c-a5ff-0c2475eec4f5 0xc002d9ca17 0xc002d9ca18}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d9cab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 11 23:39:08.760: INFO: Pod "test-recreate-deployment-5f94c574ff-j4r8p" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-j4r8p test-recreate-deployment-5f94c574ff- deployment-4989 /api/v1/namespaces/deployment-4989/pods/test-recreate-deployment-5f94c574ff-j4r8p 8d383fca-800e-435b-9768-a895fd09a8ac 927246 0 2020-03-11 23:39:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 93b05037-764c-4b37-9f74-c18cd5e98096 0xc002ca4867 0xc002ca4868}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xwlrl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xwlrl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xwlrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:39:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:39:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:39:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-11 23:39:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-11 23:39:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:08.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4989" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":27,"skipped":364,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:08.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:08.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8147" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":28,"skipped":375,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:08.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 11 23:39:08.990: INFO: Waiting up to 5m0s for pod "pod-b079292d-f83e-4967-8e60-a32a7dac4626" in namespace "emptydir-2429" to be "success or failure" Mar 11 23:39:08.994: INFO: Pod "pod-b079292d-f83e-4967-8e60-a32a7dac4626": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0816ms Mar 11 23:39:10.997: INFO: Pod "pod-b079292d-f83e-4967-8e60-a32a7dac4626": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007740475s STEP: Saw pod success Mar 11 23:39:10.997: INFO: Pod "pod-b079292d-f83e-4967-8e60-a32a7dac4626" satisfied condition "success or failure" Mar 11 23:39:11.000: INFO: Trying to get logs from node latest-worker pod pod-b079292d-f83e-4967-8e60-a32a7dac4626 container test-container: STEP: delete the pod Mar 11 23:39:11.019: INFO: Waiting for pod pod-b079292d-f83e-4967-8e60-a32a7dac4626 to disappear Mar 11 23:39:11.040: INFO: Pod pod-b079292d-f83e-4967-8e60-a32a7dac4626 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:11.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2429" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":29,"skipped":384,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:11.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:28.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5081" for this suite. • [SLOW TEST:17.076 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":280,"completed":30,"skipped":384,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:28.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: executing a command with run --rm and attach with stdin Mar 11 23:39:28.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=kubectl-1371 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 11 23:39:29.877: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0311 23:39:29.823142 391 log.go:172] (0xc0000f4370) (0xc000681c20) Create stream\nI0311 23:39:29.823193 391 log.go:172] (0xc0000f4370) (0xc000681c20) Stream added, broadcasting: 1\nI0311 23:39:29.825726 391 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0311 23:39:29.825782 391 log.go:172] (0xc0000f4370) (0xc0006da000) Create stream\nI0311 23:39:29.825799 391 log.go:172] (0xc0000f4370) (0xc0006da000) Stream added, broadcasting: 3\nI0311 23:39:29.826842 391 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0311 23:39:29.826884 391 log.go:172] (0xc0000f4370) (0xc000681cc0) Create stream\nI0311 23:39:29.826894 391 log.go:172] (0xc0000f4370) (0xc000681cc0) Stream added, broadcasting: 5\nI0311 23:39:29.827846 391 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0311 23:39:29.827877 391 log.go:172] (0xc0000f4370) (0xc0006da0a0) Create stream\nI0311 23:39:29.827890 391 log.go:172] (0xc0000f4370) (0xc0006da0a0) Stream added, broadcasting: 7\nI0311 23:39:29.828911 391 log.go:172] (0xc0000f4370) Reply frame received for 7\nI0311 23:39:29.829113 391 log.go:172] (0xc0006da000) (3) Writing data frame\nI0311 23:39:29.829242 391 log.go:172] (0xc0006da000) (3) Writing data frame\nI0311 23:39:29.830077 391 log.go:172] (0xc0000f4370) Data frame received for 5\nI0311 23:39:29.830099 391 log.go:172] (0xc000681cc0) (5) Data frame handling\nI0311 23:39:29.830156 391 log.go:172] (0xc000681cc0) (5) Data frame sent\nI0311 23:39:29.830735 391 log.go:172] (0xc0000f4370) Data frame received for 5\nI0311 23:39:29.830749 391 log.go:172] (0xc000681cc0) (5) Data frame handling\nI0311 23:39:29.830760 391 log.go:172] (0xc000681cc0) (5) Data frame sent\nI0311 23:39:29.851798 391 log.go:172] (0xc0000f4370) Data frame received for 5\nI0311 23:39:29.851818 391 log.go:172] (0xc0000f4370) Data frame received for 7\nI0311 23:39:29.851843 391 log.go:172] (0xc0006da0a0) (7) Data frame handling\nI0311 23:39:29.851886 391 
log.go:172] (0xc000681cc0) (5) Data frame handling\nI0311 23:39:29.852462 391 log.go:172] (0xc0000f4370) Data frame received for 1\nI0311 23:39:29.852484 391 log.go:172] (0xc000681c20) (1) Data frame handling\nI0311 23:39:29.852512 391 log.go:172] (0xc000681c20) (1) Data frame sent\nI0311 23:39:29.852698 391 log.go:172] (0xc0000f4370) (0xc000681c20) Stream removed, broadcasting: 1\nI0311 23:39:29.852745 391 log.go:172] (0xc0000f4370) (0xc0006da000) Stream removed, broadcasting: 3\nI0311 23:39:29.852772 391 log.go:172] (0xc0000f4370) Go away received\nI0311 23:39:29.853071 391 log.go:172] (0xc0000f4370) (0xc000681c20) Stream removed, broadcasting: 1\nI0311 23:39:29.853099 391 log.go:172] (0xc0000f4370) (0xc0006da000) Stream removed, broadcasting: 3\nI0311 23:39:29.853114 391 log.go:172] (0xc0000f4370) (0xc000681cc0) Stream removed, broadcasting: 5\nI0311 23:39:29.853128 391 log.go:172] (0xc0000f4370) (0xc0006da0a0) Stream removed, broadcasting: 7\n" Mar 11 23:39:29.877: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:31.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1371" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":280,"completed":31,"skipped":388,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:31.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:33.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3392" for this suite. 
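For reference, the behavior this spec asserts — entries from spec.hostAliases appearing in the container's /etc/hosts — can be reproduced by hand with a pod like the following (a minimal sketch; the pod name, IP, and hostnames are illustrative, not taken from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:                 # kubelet appends these entries to the pod's /etc/hosts
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/hosts"]
    EOF
    kubectl logs hostaliases-demo  # expect a line mapping 127.0.0.1 to foo.local and bar.local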
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":32,"skipped":391,"failed":0} ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:33.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 11 23:39:36.732: INFO: Successfully updated pod "labelsupdateaf60c630-f7f4-4167-b8be-9fa3cc00f0d4" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:39:40.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5844" for this suite. • [SLOW TEST:6.777 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":33,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:39:40.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-7l9r STEP: Creating a pod to test atomic-volume-subpath Mar 11 23:39:40.897: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7l9r" in namespace "subpath-4241" to be "success or failure" Mar 11 23:39:40.914: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.47031ms Mar 11 23:39:42.916: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 2.019093769s Mar 11 23:39:44.920: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 4.022290847s Mar 11 23:39:46.922: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 6.025016423s Mar 11 23:39:48.926: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 8.028286285s Mar 11 23:39:50.929: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 10.032004123s Mar 11 23:39:52.949: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 12.051796127s Mar 11 23:39:54.961: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 14.06336028s Mar 11 23:39:56.964: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 16.066776831s Mar 11 23:39:58.979: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 18.082034108s Mar 11 23:40:00.983: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Running", Reason="", readiness=true. Elapsed: 20.085957267s Mar 11 23:40:02.997: INFO: Pod "pod-subpath-test-configmap-7l9r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.100045454s STEP: Saw pod success Mar 11 23:40:02.997: INFO: Pod "pod-subpath-test-configmap-7l9r" satisfied condition "success or failure" Mar 11 23:40:03.001: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-7l9r container test-container-subpath-configmap-7l9r: STEP: delete the pod Mar 11 23:40:03.047: INFO: Waiting for pod pod-subpath-test-configmap-7l9r to disappear Mar 11 23:40:03.061: INFO: Pod pod-subpath-test-configmap-7l9r no longer exists STEP: Deleting pod pod-subpath-test-configmap-7l9r Mar 11 23:40:03.061: INFO: Deleting pod "pod-subpath-test-configmap-7l9r" in namespace "subpath-4241" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:03.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4241" for this suite. 
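What "mountPath of existing file" exercises: a volumeMount with subPath projects a single configMap key over one pre-existing file, leaving the rest of that directory untouched. A minimal hand-run sketch (all names illustrative; /etc/passwd merely stands in for a file the image already has):

    kubectl create configmap subpath-demo --from-literal=greeting='hello from a configmap'
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/passwd"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/passwd   # an existing file; only this exact path is overlaid
          subPath: greeting        # mount one key of the volume, not the whole directory
      volumes:
      - name: cfg
        configMap:
          name: subpath-demo
    EOF
    kubectl logs subpath-demo      # prints the configmap value, not the image's /etc/passwd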
• [SLOW TEST:22.295 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":34,"skipped":428,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:03.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:40:03.156: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 11 23:40:05.194: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:06.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3536" for this suite. 
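The "failure condition" asserted above is the ReplicaFailure condition that the controller sets on the RC's status when the quota rejects pod creation, and that clears once the RC fits the quota again. Reproduced by hand (all names illustrative):

    kubectl create quota condition-demo --hard=pods=2
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: condition-demo
    spec:
      replicas: 3                  # one more pod than the quota allows
      selector:
        app: condition-demo
      template:
        metadata:
          labels:
            app: condition-demo
        spec:
          containers:
          - name: httpd
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    kubectl get rc condition-demo -o jsonpath='{.status.conditions}'  # expect type=ReplicaFailure
    kubectl scale rc condition-demo --replicas=2                      # back under quota; condition clears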
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":35,"skipped":439,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:06.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 11 23:40:06.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2372' Mar 11 23:40:06.413: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 23:40:06.413: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604 Mar 11 23:40:08.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2372' Mar 11 23:40:08.557: INFO: stderr: "" Mar 11 23:40:08.557: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:08.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2372" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":280,"completed":36,"skipped":453,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:08.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 11 23:40:08.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-856' Mar 11 23:40:08.688: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 23:40:08.688: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Mar 11 23:40:08.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-856' Mar 11 23:40:08.809: INFO: stderr: "" Mar 11 23:40:08.809: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:08.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-856" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":280,"completed":37,"skipped":470,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:08.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 11 23:40:08.898: INFO: Waiting up to 5m0s for pod "pod-71dcc8f4-1aa9-465f-81b7-a90d7af1c70a" in namespace "emptydir-6517" to be "success or failure" Mar 11 23:40:08.902: INFO: Pod "pod-71dcc8f4-1aa9-465f-81b7-a90d7af1c70a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.426151ms Mar 11 23:40:10.925: INFO: Pod "pod-71dcc8f4-1aa9-465f-81b7-a90d7af1c70a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027829303s STEP: Saw pod success Mar 11 23:40:10.926: INFO: Pod "pod-71dcc8f4-1aa9-465f-81b7-a90d7af1c70a" satisfied condition "success or failure" Mar 11 23:40:10.928: INFO: Trying to get logs from node latest-worker2 pod pod-71dcc8f4-1aa9-465f-81b7-a90d7af1c70a container test-container: STEP: delete the pod Mar 11 23:40:10.946: INFO: Waiting for pod pod-71dcc8f4-1aa9-465f-81b7-a90d7af1c70a to disappear Mar 11 23:40:10.976: INFO: Pod pod-71dcc8f4-1aa9-465f-81b7-a90d7af1c70a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:10.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6517" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":38,"skipped":473,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:10.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 11 23:40:11.068: INFO: Waiting up to 5m0s for pod "pod-f3fd7d9a-197c-4724-b87f-a94614a51a20" in namespace "emptydir-1541" to be "success or failure" Mar 11 23:40:11.080: INFO: Pod "pod-f3fd7d9a-197c-4724-b87f-a94614a51a20": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.661011ms Mar 11 23:40:13.084: INFO: Pod "pod-f3fd7d9a-197c-4724-b87f-a94614a51a20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015262264s STEP: Saw pod success Mar 11 23:40:13.084: INFO: Pod "pod-f3fd7d9a-197c-4724-b87f-a94614a51a20" satisfied condition "success or failure" Mar 11 23:40:13.089: INFO: Trying to get logs from node latest-worker2 pod pod-f3fd7d9a-197c-4724-b87f-a94614a51a20 container test-container: STEP: delete the pod Mar 11 23:40:13.120: INFO: Waiting for pod pod-f3fd7d9a-197c-4724-b87f-a94614a51a20 to disappear Mar 11 23:40:13.123: INFO: Pod pod-f3fd7d9a-197c-4724-b87f-a94614a51a20 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:13.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1541" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:13.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 11 23:40:13.190: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:16.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8254" for this suite. 
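"Invoke init containers" means the kubelet runs each entry of spec.initContainers to completion, in order, before any app container starts; with restartPolicy Never a failed init container fails the whole pod instead of being retried. A minimal sketch (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-1
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo init-1 done"]
      - name: init-2
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo init-2 done"]
      containers:
      - name: main
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo main running"]
    EOF
    kubectl get pod init-demo -w   # status walks Init:0/2 -> Init:1/2 -> PodInitializing -> Completed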
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":40,"skipped":505,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:16.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3625 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3625 STEP: creating replication controller externalsvc in namespace services-3625 I0311 23:40:16.815248 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3625, replica count: 2 I0311 23:40:19.865683 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 11 23:40:19.910: INFO: Creating new exec pod Mar 11 23:40:21.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-3625 execpodfkghs -- /bin/sh -x -c nslookup nodeport-service' Mar 11 23:40:22.152: INFO: stderr: "I0311 23:40:22.079706 483 log.go:172] (0xc0003c5ef0) (0xc000988640) Create stream\nI0311 23:40:22.079767 483 log.go:172] (0xc0003c5ef0) (0xc000988640) Stream added, broadcasting: 1\nI0311 23:40:22.084112 483 log.go:172] (0xc0003c5ef0) Reply frame received for 1\nI0311 23:40:22.084154 483 log.go:172] (0xc0003c5ef0) (0xc000815c20) Create stream\nI0311 23:40:22.084171 483 log.go:172] (0xc0003c5ef0) (0xc000815c20) Stream added, broadcasting: 3\nI0311 23:40:22.087261 483 log.go:172] (0xc0003c5ef0) Reply frame received for 3\nI0311 23:40:22.087378 483 log.go:172] (0xc0003c5ef0) (0xc000815cc0) Create stream\nI0311 23:40:22.087421 483 log.go:172] (0xc0003c5ef0) (0xc000815cc0) Stream added, broadcasting: 5\nI0311 23:40:22.088780 483 log.go:172] (0xc0003c5ef0) Reply frame received for 5\nI0311 23:40:22.140386 483 log.go:172] (0xc0003c5ef0) Data frame received for 5\nI0311 23:40:22.140417 483 log.go:172] (0xc000815cc0) (5) Data frame handling\nI0311 23:40:22.140436 483 log.go:172] (0xc000815cc0) (5) Data frame sent\n+ nslookup nodeport-service\nI0311 23:40:22.146494 483 log.go:172] (0xc0003c5ef0) Data frame received for 3\nI0311 23:40:22.146521 483 log.go:172] (0xc000815c20) (3) Data frame handling\nI0311 23:40:22.146534 483 log.go:172] (0xc000815c20) (3) Data frame sent\nI0311 23:40:22.146969 483 log.go:172] (0xc0003c5ef0) Data frame received for 3\nI0311 23:40:22.146987 483 log.go:172] (0xc000815c20) (3) Data frame handling\nI0311 23:40:22.146994 
483 log.go:172] (0xc000815c20) (3) Data frame sent\nI0311 23:40:22.147046 483 log.go:172] (0xc0003c5ef0) Data frame received for 3\nI0311 23:40:22.147061 483 log.go:172] (0xc000815c20) (3) Data frame handling\nI0311 23:40:22.147274 483 log.go:172] (0xc0003c5ef0) Data frame received for 5\nI0311 23:40:22.147291 483 log.go:172] (0xc000815cc0) (5) Data frame handling\nI0311 23:40:22.148756 483 log.go:172] (0xc0003c5ef0) Data frame received for 1\nI0311 23:40:22.148778 483 log.go:172] (0xc000988640) (1) Data frame handling\nI0311 23:40:22.148791 483 log.go:172] (0xc000988640) (1) Data frame sent\nI0311 23:40:22.148804 483 log.go:172] (0xc0003c5ef0) (0xc000988640) Stream removed, broadcasting: 1\nI0311 23:40:22.148852 483 log.go:172] (0xc0003c5ef0) Go away received\nI0311 23:40:22.149063 483 log.go:172] (0xc0003c5ef0) (0xc000988640) Stream removed, broadcasting: 1\nI0311 23:40:22.149076 483 log.go:172] (0xc0003c5ef0) (0xc000815c20) Stream removed, broadcasting: 3\nI0311 23:40:22.149082 483 log.go:172] (0xc0003c5ef0) (0xc000815cc0) Stream removed, broadcasting: 5\n" Mar 11 23:40:22.152: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3625.svc.cluster.local\tcanonical name = externalsvc.services-3625.svc.cluster.local.\nName:\texternalsvc.services-3625.svc.cluster.local\nAddress: 10.96.40.241\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3625, will wait for the garbage collector to delete the pods Mar 11 23:40:22.209: INFO: Deleting ReplicationController externalsvc took: 4.449302ms Mar 11 23:40:22.310: INFO: Terminating ReplicationController externalsvc pods took: 100.175428ms Mar 11 23:40:32.657: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:32.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3625" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:16.034 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":41,"skipped":507,"failed":0} S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:32.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2b641fd5-31b3-4f22-8058-eeaa4c0f2ba9 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2b641fd5-31b3-4f22-8058-eeaa4c0f2ba9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:36.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4939" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":42,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:36.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-0c9cb1f8-cb83-4940-b823-79e59254c38e STEP: Creating a pod to test consume secrets Mar 11 23:40:36.883: INFO: Waiting up to 5m0s for pod "pod-secrets-5ecd90ea-61b4-4ce4-8c95-4d5aeca303c5" in namespace "secrets-1159" to be "success or failure" Mar 11 23:40:36.902: INFO: Pod "pod-secrets-5ecd90ea-61b4-4ce4-8c95-4d5aeca303c5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.244473ms Mar 11 23:40:38.906: INFO: Pod "pod-secrets-5ecd90ea-61b4-4ce4-8c95-4d5aeca303c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022917065s STEP: Saw pod success Mar 11 23:40:38.906: INFO: Pod "pod-secrets-5ecd90ea-61b4-4ce4-8c95-4d5aeca303c5" satisfied condition "success or failure" Mar 11 23:40:38.908: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5ecd90ea-61b4-4ce4-8c95-4d5aeca303c5 container secret-env-test: STEP: delete the pod Mar 11 23:40:38.947: INFO: Waiting for pod pod-secrets-5ecd90ea-61b4-4ce4-8c95-4d5aeca303c5 to disappear Mar 11 23:40:38.973: INFO: Pod pod-secrets-5ecd90ea-61b4-4ce4-8c95-4d5aeca303c5 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:38.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1159" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":43,"skipped":552,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:38.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0311 23:40:45.345356 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 23:40:45.345: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:45.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1291" for this suite. 
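The deleteOptions behavior tested above is PropagationPolicy=Foreground: the owner gets a deletionTimestamp plus a foregroundDeletion finalizer and is only removed after the garbage collector has deleted all of its pods. By hand (name illustrative; --cascade=foreground is the kubectl >= 1.20 spelling of this policy):

    kubectl delete rc gc-demo --cascade=foreground --wait=false
    kubectl get rc gc-demo -o jsonpath='{.metadata.deletionTimestamp}'  # already set while pods terminate
    kubectl get rc gc-demo -o jsonpath='{.metadata.finalizers}'         # contains foregroundDeletion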
• [SLOW TEST:6.395 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":44,"skipped":571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:45.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:40:46.286: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 23:40:48.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566846, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566846, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566846, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566846, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 23:40:51.331: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the 
/apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:51.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5162" for this suite. STEP: Destroying namespace "webhook-5162-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.098 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":45,"skipped":599,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:51.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:40:51.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 23:40:53.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566851, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566851, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566851, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566851, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 
23:40:56.915: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:40:57.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4491" for this suite. STEP: Destroying namespace "webhook-4491-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.879 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":46,"skipped":604,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:40:57.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:41:08.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7504" for this suite. • [SLOW TEST:11.190 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":280,"completed":47,"skipped":626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:41:08.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 11 23:41:08.626: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 11 23:41:08.658: INFO: Waiting for terminating namespaces to be deleted... Mar 11 23:41:08.660: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 11 23:41:08.664: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 11 23:41:08.664: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 23:41:08.664: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 11 23:41:08.664: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 23:41:08.664: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 11 23:41:08.669: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 11 23:41:08.669: INFO: Container coredns ready: true, restart count 0 Mar 11 23:41:08.669: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 11 23:41:08.669: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 23:41:08.669: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 11 23:41:08.669: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-65cdb89b-a8e3-4db6-93be-846e9ab1408c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-65cdb89b-a8e3-4db6-93be-846e9ab1408c off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-65cdb89b-a8e3-4db6-93be-846e9ab1408c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:41:14.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-211" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:6.261 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":280,"completed":48,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:41:14.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0311 23:41:15.910636 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 23:41:15.910: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:41:15.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2073" for this suite. 
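The garbage-collector spec above deletes a Deployment with PropagationPolicy=Orphan and then verifies the ReplicaSet survives. A minimal client-go sketch of that deletion mode, assuming a client-go release with context-taking, value-options signatures (v0.19-era or newer); the namespace and Deployment name are illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Orphan propagation deletes the owner but strips the ownerReference
	// from its dependents, so the ReplicaSet (and its pods) stay behind
	// instead of being cascaded away.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments("default").Delete(context.TODO(),
		"example-deployment", // illustrative name
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
}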
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":49,"skipped":676,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:41:15.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-6240 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 23:41:15.989: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 11 23:41:16.127: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 11 23:41:18.130: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 11 23:41:20.148: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 23:41:22.129: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 23:41:24.146: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 23:41:26.131: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 23:41:28.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 23:41:30.131: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 23:41:32.131: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 11 23:41:32.136: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 11 23:41:34.140: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 11 23:41:38.163: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.229:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6240 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 23:41:38.163: INFO: >>> kubeConfig: /root/.kube/config I0311 23:41:38.184925 7 log.go:172] (0xc001df51e0) (0xc001df7180) Create stream I0311 23:41:38.184940 7 log.go:172] (0xc001df51e0) (0xc001df7180) Stream added, broadcasting: 1 I0311 23:41:38.185902 7 log.go:172] (0xc001df51e0) Reply frame received for 1 I0311 23:41:38.185929 7 log.go:172] (0xc001df51e0) (0xc00281e0a0) Create stream I0311 23:41:38.185939 7 log.go:172] (0xc001df51e0) (0xc00281e0a0) Stream added, broadcasting: 3 I0311 23:41:38.186604 7 log.go:172] (0xc001df51e0) Reply frame received for 3 I0311 23:41:38.186619 7 log.go:172] (0xc001df51e0) (0xc001df7400) Create stream I0311 23:41:38.186625 7 log.go:172] (0xc001df51e0) (0xc001df7400) Stream added, broadcasting: 5 I0311 23:41:38.187271 7 log.go:172] (0xc001df51e0) Reply frame received for 5 I0311 23:41:38.249116 7 log.go:172] (0xc001df51e0) Data frame received for 5 I0311 23:41:38.249148 7 
log.go:172] (0xc001df7400) (5) Data frame handling I0311 23:41:38.249165 7 log.go:172] (0xc001df51e0) Data frame received for 3 I0311 23:41:38.249173 7 log.go:172] (0xc00281e0a0) (3) Data frame handling I0311 23:41:38.249180 7 log.go:172] (0xc00281e0a0) (3) Data frame sent I0311 23:41:38.249188 7 log.go:172] (0xc001df51e0) Data frame received for 3 I0311 23:41:38.249195 7 log.go:172] (0xc00281e0a0) (3) Data frame handling I0311 23:41:38.250329 7 log.go:172] (0xc001df51e0) Data frame received for 1 I0311 23:41:38.250342 7 log.go:172] (0xc001df7180) (1) Data frame handling I0311 23:41:38.250350 7 log.go:172] (0xc001df7180) (1) Data frame sent I0311 23:41:38.250515 7 log.go:172] (0xc001df51e0) (0xc001df7180) Stream removed, broadcasting: 1 I0311 23:41:38.250541 7 log.go:172] (0xc001df51e0) Go away received I0311 23:41:38.250636 7 log.go:172] (0xc001df51e0) (0xc001df7180) Stream removed, broadcasting: 1 I0311 23:41:38.250661 7 log.go:172] (0xc001df51e0) (0xc00281e0a0) Stream removed, broadcasting: 3 I0311 23:41:38.250674 7 log.go:172] (0xc001df51e0) (0xc001df7400) Stream removed, broadcasting: 5 Mar 11 23:41:38.250: INFO: Found all expected endpoints: [netserver-0] Mar 11 23:41:38.253: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.218:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6240 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 23:41:38.253: INFO: >>> kubeConfig: /root/.kube/config I0311 23:41:38.275664 7 log.go:172] (0xc001df5760) (0xc001df7680) Create stream I0311 23:41:38.275689 7 log.go:172] (0xc001df5760) (0xc001df7680) Stream added, broadcasting: 1 I0311 23:41:38.276902 7 log.go:172] (0xc001df5760) Reply frame received for 1 I0311 23:41:38.276935 7 log.go:172] (0xc001df5760) (0xc001df7860) Create stream I0311 23:41:38.276946 7 log.go:172] (0xc001df5760) (0xc001df7860) Stream added, broadcasting: 3 I0311 23:41:38.277467 7 log.go:172] (0xc001df5760) Reply frame received for 3 I0311 23:41:38.277485 7 log.go:172] (0xc001df5760) (0xc001df79a0) Create stream I0311 23:41:38.277492 7 log.go:172] (0xc001df5760) (0xc001df79a0) Stream added, broadcasting: 5 I0311 23:41:38.278017 7 log.go:172] (0xc001df5760) Reply frame received for 5 I0311 23:41:38.334212 7 log.go:172] (0xc001df5760) Data frame received for 3 I0311 23:41:38.334241 7 log.go:172] (0xc001df7860) (3) Data frame handling I0311 23:41:38.334256 7 log.go:172] (0xc001df7860) (3) Data frame sent I0311 23:41:38.334339 7 log.go:172] (0xc001df5760) Data frame received for 3 I0311 23:41:38.334362 7 log.go:172] (0xc001df7860) (3) Data frame handling I0311 23:41:38.334416 7 log.go:172] (0xc001df5760) Data frame received for 5 I0311 23:41:38.334434 7 log.go:172] (0xc001df79a0) (5) Data frame handling I0311 23:41:38.335476 7 log.go:172] (0xc001df5760) Data frame received for 1 I0311 23:41:38.335492 7 log.go:172] (0xc001df7680) (1) Data frame handling I0311 23:41:38.335508 7 log.go:172] (0xc001df7680) (1) Data frame sent I0311 23:41:38.335519 7 log.go:172] (0xc001df5760) (0xc001df7680) Stream removed, broadcasting: 1 I0311 23:41:38.335538 7 log.go:172] (0xc001df5760) Go away received I0311 23:41:38.335621 7 log.go:172] (0xc001df5760) (0xc001df7680) Stream removed, broadcasting: 1 I0311 23:41:38.335639 7 log.go:172] (0xc001df5760) (0xc001df7860) Stream removed, broadcasting: 3 I0311 23:41:38.335653 7 log.go:172] (0xc001df5760) (0xc001df79a0) Stream removed, broadcasting: 5 Mar 11 
23:41:38.335: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:41:38.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6240" for this suite. • [SLOW TEST:22.439 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":50,"skipped":685,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:41:38.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:41:38.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 23:41:40.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566898, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566898, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566898, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719566898, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 23:41:43.847: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that 
server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:41:43.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9880" for this suite. STEP: Destroying namespace "webhook-9880-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.626 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":51,"skipped":686,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:41:43.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
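The handler container created above is the observation point for the next spec: a pod whose postStart hook performs an HTTP GET against it, as the log shows below. A minimal sketch of a pod carrying such a hook, assuming the API types contemporary with this log (corev1.Handler; newer releases rename it LifecycleHandler); the host IP, path, and image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// Fired by the kubelet right after the container starts;
					// the GET lands on the handler pod, which is how the
					// suite observes that the hook ran.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // illustrative path
							Host: "10.244.1.1",          // illustrative handler-pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}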
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 11 23:41:52.146: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 23:41:52.157: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 23:41:54.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 23:41:54.160: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 23:41:56.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 23:41:56.161: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 23:41:58.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 23:41:58.159: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 23:42:00.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 23:42:00.161: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 23:42:02.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 23:42:02.159: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 23:42:04.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 23:42:04.160: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:42:04.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7231" for this suite. • [SLOW TEST:20.188 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:42:04.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-3d4183d4-95a6-4a1e-8f22-decb56b355ea in namespace container-probe-6481 Mar 11 23:42:06.316: INFO: Started pod 
liveness-3d4183d4-95a6-4a1e-8f22-decb56b355ea in namespace container-probe-6481 STEP: checking the pod's current state and verifying that restartCount is present Mar 11 23:42:06.318: INFO: Initial restart count of pod liveness-3d4183d4-95a6-4a1e-8f22-decb56b355ea is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:46:07.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6481" for this suite. • [SLOW TEST:243.277 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":53,"skipped":772,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:46:07.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 11 23:46:07.518: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:46:10.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2980" for this suite. 
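The init-container spec above depends on the pod failing permanently: with restartPolicy Never, a single failing init container marks the whole pod Failed and the app container is never started. A minimal sketch of that shape, reusing the images visible elsewhere in this run; the pod name is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"}, // illustrative name
		Spec: corev1.PodSpec{
			// Never means the failed init container is not retried, so the
			// pod goes straight to Failed and run1 never starts.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}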
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":54,"skipped":781,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:46:10.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:46:10.765: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d93e5cd2-658a-45b3-85c4-31238dcd7f62" in namespace "projected-4516" to be "success or failure" Mar 11 23:46:10.799: INFO: Pod "downwardapi-volume-d93e5cd2-658a-45b3-85c4-31238dcd7f62": Phase="Pending", Reason="", readiness=false. Elapsed: 33.259839ms Mar 11 23:46:12.802: INFO: Pod "downwardapi-volume-d93e5cd2-658a-45b3-85c4-31238dcd7f62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.03654173s STEP: Saw pod success Mar 11 23:46:12.802: INFO: Pod "downwardapi-volume-d93e5cd2-658a-45b3-85c4-31238dcd7f62" satisfied condition "success or failure" Mar 11 23:46:12.804: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d93e5cd2-658a-45b3-85c4-31238dcd7f62 container client-container: STEP: delete the pod Mar 11 23:46:12.828: INFO: Waiting for pod downwardapi-volume-d93e5cd2-658a-45b3-85c4-31238dcd7f62 to disappear Mar 11 23:46:12.833: INFO: Pod downwardapi-volume-d93e5cd2-658a-45b3-85c4-31238dcd7f62 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:46:12.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4516" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":55,"skipped":785,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:46:12.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-8dc9a04b-2c40-4645-b126-59dfa73763be STEP: Creating a pod to test consume secrets Mar 11 23:46:12.937: INFO: Waiting up to 5m0s for pod "pod-secrets-778eb1f6-eab1-477a-9d57-6d020dcc6ae7" in namespace "secrets-8111" to be "success or failure" Mar 11 23:46:12.947: INFO: Pod "pod-secrets-778eb1f6-eab1-477a-9d57-6d020dcc6ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093761ms Mar 11 23:46:14.951: INFO: Pod "pod-secrets-778eb1f6-eab1-477a-9d57-6d020dcc6ae7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014298543s STEP: Saw pod success Mar 11 23:46:14.951: INFO: Pod "pod-secrets-778eb1f6-eab1-477a-9d57-6d020dcc6ae7" satisfied condition "success or failure" Mar 11 23:46:14.954: INFO: Trying to get logs from node latest-worker pod pod-secrets-778eb1f6-eab1-477a-9d57-6d020dcc6ae7 container secret-volume-test: STEP: delete the pod Mar 11 23:46:14.972: INFO: Waiting for pod pod-secrets-778eb1f6-eab1-477a-9d57-6d020dcc6ae7 to disappear Mar 11 23:46:14.990: INFO: Pod pod-secrets-778eb1f6-eab1-477a-9d57-6d020dcc6ae7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:46:14.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8111" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:46:14.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:46:15.822: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 23:46:17.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567175, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567175, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567175, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567175, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 23:46:20.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:46:21.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4984" for this suite. STEP: Destroying namespace "webhook-4984-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.066 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":57,"skipped":826,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:46:21.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 11 23:46:21.145: INFO: PodSpec: initContainers in spec.initContainers Mar 11 23:47:03.698: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7e0b2a20-9a04-408a-a7a5-7b3ffd68b3fa", GenerateName:"", Namespace:"init-container-7154", SelfLink:"/api/v1/namespaces/init-container-7154/pods/pod-init-7e0b2a20-9a04-408a-a7a5-7b3ffd68b3fa", UID:"d173fa5a-b328-45e8-96e5-c357e1ea5f10", ResourceVersion:"930058", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719567181, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"145836101"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lrls8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0052ea000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lrls8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lrls8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lrls8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d9c068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028e6000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d9c0f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d9c110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002d9c118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002d9c11c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.16", PodIP:"10.244.1.239", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.239"}}, StartTime:(*v1.Time)(0xc003346080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0012920e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001292150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://da304a6ac3fbfaa93d19bc1af047a2c17212bb8dcfb2a4329b6ab596e85c3646", Started:(*bool)(nil)}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0033460c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0033460a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002d9c19f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:03.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7154" for this suite. • [SLOW TEST:42.730 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":58,"skipped":840,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:03.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:47:03.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3700' Mar 11 23:47:04.557: INFO: stderr: "" Mar 11 23:47:04.557: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 11 23:47:04.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3700' Mar 11 23:47:06.239: INFO: stderr: "" Mar 11 23:47:06.239: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 11 23:47:07.243: INFO: Selector matched 1 pods for map[app:agnhost] Mar 11 23:47:07.243: INFO: Found 1 / 1 Mar 11 23:47:07.243: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 11 23:47:07.245: INFO: Selector matched 1 pods for map[app:agnhost] Mar 11 23:47:07.245: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 11 23:47:07.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe pod agnhost-master-5sx9g --namespace=kubectl-3700' Mar 11 23:47:08.050: INFO: stderr: "" Mar 11 23:47:08.050: INFO: stdout: "Name: agnhost-master-5sx9g\nNamespace: kubectl-3700\nPriority: 0\nNode: latest-worker/172.17.0.16\nStart Time: Wed, 11 Mar 2020 23:47:04 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.240\nIPs:\n IP: 10.244.1.240\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://ff40979b4c1ea4fc1ebccd06390f5a57cf439fb7b462c631016d81391783b5f8\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 11 Mar 2020 23:47:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hcrxj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hcrxj:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hcrxj\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-3700/agnhost-master-5sx9g to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container agnhost-master\n" Mar 11 23:47:08.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3700' Mar 11 23:47:08.176: INFO: stderr: "" Mar 11 23:47:08.176: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3700\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-5sx9g\n" Mar 11 23:47:08.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3700' Mar 11 23:47:08.268: INFO: stderr: "" Mar 11 23:47:08.269: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3700\nLabels: app=agnhost\n role=master\nAnnotations: 
\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.240.92\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.240:6379\nSession Affinity: None\nEvents: \n" Mar 11 23:47:08.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe node latest-control-plane' Mar 11 23:47:08.391: INFO: stderr: "" Mar 11 23:47:08.391: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:49:22 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 11 Mar 2020 23:47:05 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 11 Mar 2020 23:42:12 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 11 Mar 2020 23:42:12 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 11 Mar 2020 23:42:12 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 11 Mar 2020 23:42:12 +0000 Sun, 08 Mar 2020 14:50:16 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.17\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: fb03af8223ea4430b6faaad8b31da5e5\n System UUID: 220fc748-c3b9-4de4-aa76-4a3520169f00\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-gxrvh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3d8h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d8h\n kube-system kindnet-gp8bt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 3d8h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3d8h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3d8h\n kube-system kube-proxy-nxxmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d8h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3d8h\n local-path-storage local-path-provisioner-7745554f7f-52xw4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d8h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 
120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 11 23:47:08.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe namespace kubectl-3700' Mar 11 23:47:08.470: INFO: stderr: "" Mar 11 23:47:08.470: INFO: stdout: "Name: kubectl-3700\nLabels: e2e-framework=kubectl\n e2e-run=a4c78ec9-80ba-47d6-8b28-fe2c2fcf9aba\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:08.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3700" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":280,"completed":59,"skipped":845,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:08.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:08.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9048" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":60,"skipped":846,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:08.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:19.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8421" for this suite. • [SLOW TEST:11.149 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":61,"skipped":849,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:19.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 11 23:47:19.898: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 11 23:47:19.908: INFO: Waiting for terminating namespaces to be deleted... 
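The ResourceQuota spec that just finished watches quota accounting around a ReplicaSet's lifetime: status.used rises to 1 on creation and drops back to 0 after deletion. A minimal sketch of a quota object that counts ReplicaSets, assuming the count/replicasets.apps object-count resource is what the spec tracks; the quota name is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rq := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"}, // illustrative name
		Spec: corev1.ResourceQuotaSpec{
			// Object-count quota: the quota controller tracks how many
			// ReplicaSets exist in the namespace under status.used.
			Hard: corev1.ResourceList{
				corev1.ResourceName("count/replicasets.apps"): resource.MustParse("1"),
			},
		},
	}
	out, _ := json.MarshalIndent(rq, "", "  ")
	fmt.Println(string(out))
}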
Mar 11 23:47:19.911: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 11 23:47:19.916: INFO: agnhost-master-5sx9g from kubectl-3700 started at 2020-03-11 23:47:04 +0000 UTC (1 container status recorded) Mar 11 23:47:19.916: INFO: Container agnhost-master ready: false, restart count 0 Mar 11 23:47:19.916: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 11 23:47:19.916: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 23:47:19.916: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container status recorded) Mar 11 23:47:19.916: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 23:47:19.916: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 11 23:47:19.934: INFO: pod-qos-class-90b267b6-1217-456d-ad0d-09965d086a21 from pods-9048 started at 2020-03-11 23:47:08 +0000 UTC (1 container status recorded) Mar 11 23:47:19.934: INFO: Container agnhost ready: false, restart count 0 Mar 11 23:47:19.934: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 11 23:47:19.934: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 23:47:19.934: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container status recorded) Mar 11 23:47:19.934: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 23:47:19.934: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container status recorded) Mar 11 23:47:19.934: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fb64b90fe34234], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:20.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8291" for this suite.
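The FailedScheduling event above is what any pod with an unsatisfiable nodeSelector produces; a minimal sketch that reproduces it by hand (pod name, label key/value, and image are illustrative, not taken from this run):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod
    spec:
      nodeSelector:
        env: no-such-label        # matches no node in the cluster
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF
    kubectl describe pod restricted-pod   # Events: FailedScheduling ... node(s) didn't match node selector

The pod stays Pending indefinitely; the scheduler re-evaluates it only when node labels or the pod spec change.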
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":62,"skipped":965,"failed":0} SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:20.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:47:21.008: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:23.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9452" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":63,"skipped":968,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:23.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
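The [It] spec that follows creates a pod whose container declares a postStart exec hook. A minimal sketch of such a pod (image and hook command are illustrative, not the test's actual spec):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-exec-hook
    spec:
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          postStart:
            exec:
              command: ["sh", "-c", "echo started > /tmp/poststart"]
    EOF

The kubelet runs the hook immediately after the container starts, and the container is not considered Running until the hook handler returns, which is why the check below can assert on the hook's side effect.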
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 11 23:47:29.242: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:29.279: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 23:47:31.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:31.283: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 23:47:33.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:33.282: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 23:47:35.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:35.282: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 23:47:37.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:37.282: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 23:47:39.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:39.283: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 23:47:41.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:41.283: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 23:47:43.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 23:47:43.283: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:43.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5396" for this suite. • [SLOW TEST:20.157 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":64,"skipped":995,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:43.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:56.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9278" for this suite. • [SLOW TEST:13.177 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":65,"skipped":1007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:56.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:47:59.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7471" for this suite. 
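The adoption sequence above relies purely on label selection: an orphan pod whose labels match a ReplicationController's selector gains an ownerReference to that controller. A hand-run equivalent (names and image are illustrative):

    kubectl run pod-adoption --image=httpd:2.4 --labels=name=pod-adoption --restart=Never
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: httpd
            image: httpd:2.4
    EOF
    kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # prints ReplicationController once adopted

Because the orphan pod satisfies replicas: 1, the controller adopts it rather than creating a new pod.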
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":66,"skipped":1060,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:47:59.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:47:59.758: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 11 23:47:59.764: INFO: Number of nodes with available pods: 0 Mar 11 23:47:59.764: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 11 23:47:59.829: INFO: Number of nodes with available pods: 0 Mar 11 23:47:59.829: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:00.842: INFO: Number of nodes with available pods: 0 Mar 11 23:48:00.842: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:01.832: INFO: Number of nodes with available pods: 1 Mar 11 23:48:01.832: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 11 23:48:01.884: INFO: Number of nodes with available pods: 1 Mar 11 23:48:01.884: INFO: Number of running nodes: 0, number of available pods: 1 Mar 11 23:48:02.887: INFO: Number of nodes with available pods: 0 Mar 11 23:48:02.887: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 11 23:48:02.902: INFO: Number of nodes with available pods: 0 Mar 11 23:48:02.902: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:03.906: INFO: Number of nodes with available pods: 0 Mar 11 23:48:03.906: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:04.906: INFO: Number of nodes with available pods: 0 Mar 11 23:48:04.906: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:05.908: INFO: Number of nodes with available pods: 0 Mar 11 23:48:05.908: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:06.905: INFO: Number of nodes with available pods: 0 Mar 11 23:48:06.905: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:07.906: INFO: Number of nodes with available pods: 0 Mar 11 23:48:07.906: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:08.915: INFO: Number of nodes with available pods: 0 Mar 11 23:48:08.915: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:09.905: INFO: Number of nodes with available pods: 0 Mar 11 23:48:09.905: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:10.907: INFO: Number of nodes with 
available pods: 0 Mar 11 23:48:10.907: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:11.906: INFO: Number of nodes with available pods: 0 Mar 11 23:48:11.906: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:12.906: INFO: Number of nodes with available pods: 0 Mar 11 23:48:12.906: INFO: Node latest-worker2 is running more than one daemon pod Mar 11 23:48:13.906: INFO: Number of nodes with available pods: 1 Mar 11 23:48:13.906: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1631, will wait for the garbage collector to delete the pods Mar 11 23:48:13.968: INFO: Deleting DaemonSet.extensions daemon-set took: 5.57802ms Mar 11 23:48:14.268: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.204695ms Mar 11 23:48:22.172: INFO: Number of nodes with available pods: 0 Mar 11 23:48:22.172: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 23:48:22.174: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1631/daemonsets","resourceVersion":"930581"},"items":null} Mar 11 23:48:22.176: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1631/pods","resourceVersion":"930581"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:48:22.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1631" for this suite. 
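The spec above drives scheduling purely through node labels: the DaemonSet's nodeSelector is flipped from blue to green and the daemon pod follows. A minimal sketch of the same flow (label key/value and image are illustrative; the node name is from this run):

    kubectl label node latest-worker2 color=blue
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          nodeSelector:
            color: blue           # only nodes carrying this label run the daemon pod
          containers:
          - name: app
            image: httpd:2.4
    EOF
    kubectl label node latest-worker2 color=green --overwrite   # daemon pod is unscheduled once the selector stops matching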
• [SLOW TEST:22.649 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":67,"skipped":1064,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:48:22.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token STEP: reading a file in the container Mar 11 23:48:24.834: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6785 pod-service-account-9c2aa651-9ff0-4144-b5d7-842d73deaa2c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 11 23:48:25.030: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6785 pod-service-account-9c2aa651-9ff0-4144-b5d7-842d73deaa2c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 11 23:48:25.303: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6785 pod-service-account-9c2aa651-9ff0-4144-b5d7-842d73deaa2c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:48:25.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6785" for this suite. 
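The three files read above come from the projected service-account volume that the kubelet mounts into every container by default. The same check works against any running pod (pod name is illustrative):

    kubectl exec mypod -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # ca.crt  namespace  token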
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":68,"skipped":1080,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:48:25.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 11 23:48:25.539: INFO: >>> kubeConfig: /root/.kube/config Mar 11 23:48:27.380: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:48:37.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5536" for this suite. • [SLOW TEST:12.035 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":69,"skipped":1081,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:48:37.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-36f65d6e-c8b5-4aa0-a7f9-7c449f7246f4 STEP: Creating a pod to test consume configMaps Mar 11 23:48:37.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8126b78-7f40-4bf7-a079-055e760be3ad" in namespace "configmap-8280" to be "success or failure" Mar 11 23:48:37.594: INFO: Pod "pod-configmaps-d8126b78-7f40-4bf7-a079-055e760be3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 19.273732ms Mar 11 23:48:39.598: INFO: Pod "pod-configmaps-d8126b78-7f40-4bf7-a079-055e760be3ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.023088956s STEP: Saw pod success Mar 11 23:48:39.598: INFO: Pod "pod-configmaps-d8126b78-7f40-4bf7-a079-055e760be3ad" satisfied condition "success or failure" Mar 11 23:48:39.601: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d8126b78-7f40-4bf7-a079-055e760be3ad container configmap-volume-test: STEP: delete the pod Mar 11 23:48:39.630: INFO: Waiting for pod pod-configmaps-d8126b78-7f40-4bf7-a079-055e760be3ad to disappear Mar 11 23:48:39.634: INFO: Pod pod-configmaps-d8126b78-7f40-4bf7-a079-055e760be3ad no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:48:39.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8280" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":70,"skipped":1090,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:48:39.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3523 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-3523 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3523 Mar 11 23:48:39.753: INFO: Found 0 stateful pods, waiting for 1 Mar 11 23:48:49.758: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 11 23:48:49.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 11 23:48:50.008: INFO: stderr: "I0311 23:48:49.902765 730 log.go:172] (0xc000bdd130) (0xc000910640) Create stream\nI0311 23:48:49.902819 730 log.go:172] (0xc000bdd130) (0xc000910640) Stream added, broadcasting: 1\nI0311 23:48:49.906164 730 log.go:172] (0xc000bdd130) Reply frame received for 1\nI0311 23:48:49.906222 730 log.go:172] (0xc000bdd130) (0xc0009106e0) Create stream\nI0311 23:48:49.906231 730 log.go:172] (0xc000bdd130) (0xc0009106e0) Stream added, broadcasting: 3\nI0311 23:48:49.907219 730 log.go:172] (0xc000bdd130) Reply frame received for 3\nI0311 23:48:49.907721 730 log.go:172] (0xc000bdd130) (0xc0009fc280) Create stream\nI0311 23:48:49.907747 730 
log.go:172] (0xc000bdd130) (0xc0009fc280) Stream added, broadcasting: 5\nI0311 23:48:49.910395 730 log.go:172] (0xc000bdd130) Reply frame received for 5\nI0311 23:48:49.980647 730 log.go:172] (0xc000bdd130) Data frame received for 5\nI0311 23:48:49.980684 730 log.go:172] (0xc0009fc280) (5) Data frame handling\nI0311 23:48:49.980720 730 log.go:172] (0xc0009fc280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0311 23:48:50.003218 730 log.go:172] (0xc000bdd130) Data frame received for 3\nI0311 23:48:50.003237 730 log.go:172] (0xc0009106e0) (3) Data frame handling\nI0311 23:48:50.003251 730 log.go:172] (0xc0009106e0) (3) Data frame sent\nI0311 23:48:50.003257 730 log.go:172] (0xc000bdd130) Data frame received for 3\nI0311 23:48:50.003260 730 log.go:172] (0xc0009106e0) (3) Data frame handling\nI0311 23:48:50.003589 730 log.go:172] (0xc000bdd130) Data frame received for 5\nI0311 23:48:50.003612 730 log.go:172] (0xc0009fc280) (5) Data frame handling\nI0311 23:48:50.004870 730 log.go:172] (0xc000bdd130) Data frame received for 1\nI0311 23:48:50.004904 730 log.go:172] (0xc000910640) (1) Data frame handling\nI0311 23:48:50.004951 730 log.go:172] (0xc000910640) (1) Data frame sent\nI0311 23:48:50.004975 730 log.go:172] (0xc000bdd130) (0xc000910640) Stream removed, broadcasting: 1\nI0311 23:48:50.004996 730 log.go:172] (0xc000bdd130) Go away received\nI0311 23:48:50.005316 730 log.go:172] (0xc000bdd130) (0xc000910640) Stream removed, broadcasting: 1\nI0311 23:48:50.005334 730 log.go:172] (0xc000bdd130) (0xc0009106e0) Stream removed, broadcasting: 3\nI0311 23:48:50.005341 730 log.go:172] (0xc000bdd130) (0xc0009fc280) Stream removed, broadcasting: 5\n" Mar 11 23:48:50.008: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 11 23:48:50.008: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 11 23:48:50.012: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 11 23:49:00.017: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 23:49:00.017: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 23:49:00.042: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:00.042: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:39 +0000 UTC }] Mar 11 23:49:00.042: INFO: Mar 11 23:49:00.042: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 11 23:49:01.046: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993859932s Mar 11 23:49:02.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989210697s Mar 11 23:49:03.063: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.976582108s Mar 11 23:49:04.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.972751907s Mar 11 23:49:05.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.964919132s Mar 11 23:49:06.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.960880874s Mar 11 23:49:07.087: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 2.95291654s Mar 11 23:49:08.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.948759014s Mar 11 23:49:09.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 945.227901ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3523 Mar 11 23:49:10.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:49:10.279: INFO: stderr: "I0311 23:49:10.220687 749 log.go:172] (0xc000a2f810) (0xc000b34960) Create stream\nI0311 23:49:10.220733 749 log.go:172] (0xc000a2f810) (0xc000b34960) Stream added, broadcasting: 1\nI0311 23:49:10.223466 749 log.go:172] (0xc000a2f810) Reply frame received for 1\nI0311 23:49:10.223492 749 log.go:172] (0xc000a2f810) (0xc0006186e0) Create stream\nI0311 23:49:10.223499 749 log.go:172] (0xc000a2f810) (0xc0006186e0) Stream added, broadcasting: 3\nI0311 23:49:10.224022 749 log.go:172] (0xc000a2f810) Reply frame received for 3\nI0311 23:49:10.224045 749 log.go:172] (0xc000a2f810) (0xc000403360) Create stream\nI0311 23:49:10.224052 749 log.go:172] (0xc000a2f810) (0xc000403360) Stream added, broadcasting: 5\nI0311 23:49:10.224510 749 log.go:172] (0xc000a2f810) Reply frame received for 5\nI0311 23:49:10.276233 749 log.go:172] (0xc000a2f810) Data frame received for 3\nI0311 23:49:10.276255 749 log.go:172] (0xc0006186e0) (3) Data frame handling\nI0311 23:49:10.276262 749 log.go:172] (0xc0006186e0) (3) Data frame sent\nI0311 23:49:10.276267 749 log.go:172] (0xc000a2f810) Data frame received for 3\nI0311 23:49:10.276271 749 log.go:172] (0xc0006186e0) (3) Data frame handling\nI0311 23:49:10.276278 749 log.go:172] (0xc000a2f810) Data frame received for 5\nI0311 23:49:10.276286 749 log.go:172] (0xc000403360) (5) Data frame handling\nI0311 23:49:10.276292 749 log.go:172] (0xc000403360) (5) Data frame sent\nI0311 23:49:10.276296 749 log.go:172] (0xc000a2f810) Data frame received for 5\nI0311 23:49:10.276300 749 log.go:172] (0xc000403360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0311 23:49:10.277245 749 log.go:172] (0xc000a2f810) Data frame received for 1\nI0311 23:49:10.277257 749 log.go:172] (0xc000b34960) (1) Data frame handling\nI0311 23:49:10.277268 749 log.go:172] (0xc000b34960) (1) Data frame sent\nI0311 23:49:10.277277 749 log.go:172] (0xc000a2f810) (0xc000b34960) Stream removed, broadcasting: 1\nI0311 23:49:10.277349 749 log.go:172] (0xc000a2f810) Go away received\nI0311 23:49:10.277658 749 log.go:172] (0xc000a2f810) (0xc000b34960) Stream removed, broadcasting: 1\nI0311 23:49:10.277673 749 log.go:172] (0xc000a2f810) (0xc0006186e0) Stream removed, broadcasting: 3\nI0311 23:49:10.277681 749 log.go:172] (0xc000a2f810) (0xc000403360) Stream removed, broadcasting: 5\n" Mar 11 23:49:10.280: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 11 23:49:10.280: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 11 23:49:10.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:49:10.419: INFO: stderr: "I0311 23:49:10.367365 768 
log.go:172] (0xc000914a50) (0xc0005f9b80) Create stream\nI0311 23:49:10.367408 768 log.go:172] (0xc000914a50) (0xc0005f9b80) Stream added, broadcasting: 1\nI0311 23:49:10.369415 768 log.go:172] (0xc000914a50) Reply frame received for 1\nI0311 23:49:10.369446 768 log.go:172] (0xc000914a50) (0xc00065a000) Create stream\nI0311 23:49:10.369454 768 log.go:172] (0xc000914a50) (0xc00065a000) Stream added, broadcasting: 3\nI0311 23:49:10.370170 768 log.go:172] (0xc000914a50) Reply frame received for 3\nI0311 23:49:10.370195 768 log.go:172] (0xc000914a50) (0xc0005f9c20) Create stream\nI0311 23:49:10.370205 768 log.go:172] (0xc000914a50) (0xc0005f9c20) Stream added, broadcasting: 5\nI0311 23:49:10.370735 768 log.go:172] (0xc000914a50) Reply frame received for 5\nI0311 23:49:10.415471 768 log.go:172] (0xc000914a50) Data frame received for 5\nI0311 23:49:10.415503 768 log.go:172] (0xc0005f9c20) (5) Data frame handling\nI0311 23:49:10.415511 768 log.go:172] (0xc0005f9c20) (5) Data frame sent\nI0311 23:49:10.415518 768 log.go:172] (0xc000914a50) Data frame received for 5\nI0311 23:49:10.415522 768 log.go:172] (0xc0005f9c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0311 23:49:10.415537 768 log.go:172] (0xc000914a50) Data frame received for 3\nI0311 23:49:10.415546 768 log.go:172] (0xc00065a000) (3) Data frame handling\nI0311 23:49:10.415551 768 log.go:172] (0xc00065a000) (3) Data frame sent\nI0311 23:49:10.415555 768 log.go:172] (0xc000914a50) Data frame received for 3\nI0311 23:49:10.415561 768 log.go:172] (0xc00065a000) (3) Data frame handling\nI0311 23:49:10.416273 768 log.go:172] (0xc000914a50) Data frame received for 1\nI0311 23:49:10.416282 768 log.go:172] (0xc0005f9b80) (1) Data frame handling\nI0311 23:49:10.416292 768 log.go:172] (0xc0005f9b80) (1) Data frame sent\nI0311 23:49:10.416303 768 log.go:172] (0xc000914a50) (0xc0005f9b80) Stream removed, broadcasting: 1\nI0311 23:49:10.416314 768 log.go:172] (0xc000914a50) Go away received\nI0311 23:49:10.416598 768 log.go:172] (0xc000914a50) (0xc0005f9b80) Stream removed, broadcasting: 1\nI0311 23:49:10.416615 768 log.go:172] (0xc000914a50) (0xc00065a000) Stream removed, broadcasting: 3\nI0311 23:49:10.416621 768 log.go:172] (0xc000914a50) (0xc0005f9c20) Stream removed, broadcasting: 5\n" Mar 11 23:49:10.419: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 11 23:49:10.419: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 11 23:49:10.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:49:10.579: INFO: stderr: "I0311 23:49:10.505171 791 log.go:172] (0xc00003b600) (0xc00060b900) Create stream\nI0311 23:49:10.505216 791 log.go:172] (0xc00003b600) (0xc00060b900) Stream added, broadcasting: 1\nI0311 23:49:10.506888 791 log.go:172] (0xc00003b600) Reply frame received for 1\nI0311 23:49:10.506910 791 log.go:172] (0xc00003b600) (0xc000acc000) Create stream\nI0311 23:49:10.506917 791 log.go:172] (0xc00003b600) (0xc000acc000) Stream added, broadcasting: 3\nI0311 23:49:10.507484 791 log.go:172] (0xc00003b600) Reply frame received for 3\nI0311 23:49:10.507510 791 log.go:172] (0xc00003b600) (0xc00060bae0) Create stream\nI0311 23:49:10.507518 791 
log.go:172] (0xc00003b600) (0xc00060bae0) Stream added, broadcasting: 5\nI0311 23:49:10.508202 791 log.go:172] (0xc00003b600) Reply frame received for 5\nI0311 23:49:10.575406 791 log.go:172] (0xc00003b600) Data frame received for 5\nI0311 23:49:10.575432 791 log.go:172] (0xc00060bae0) (5) Data frame handling\nI0311 23:49:10.575440 791 log.go:172] (0xc00060bae0) (5) Data frame sent\nI0311 23:49:10.575447 791 log.go:172] (0xc00003b600) Data frame received for 5\nI0311 23:49:10.575452 791 log.go:172] (0xc00060bae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0311 23:49:10.575468 791 log.go:172] (0xc00003b600) Data frame received for 3\nI0311 23:49:10.575472 791 log.go:172] (0xc000acc000) (3) Data frame handling\nI0311 23:49:10.575480 791 log.go:172] (0xc000acc000) (3) Data frame sent\nI0311 23:49:10.575487 791 log.go:172] (0xc00003b600) Data frame received for 3\nI0311 23:49:10.575496 791 log.go:172] (0xc000acc000) (3) Data frame handling\nI0311 23:49:10.576345 791 log.go:172] (0xc00003b600) Data frame received for 1\nI0311 23:49:10.576361 791 log.go:172] (0xc00060b900) (1) Data frame handling\nI0311 23:49:10.576370 791 log.go:172] (0xc00060b900) (1) Data frame sent\nI0311 23:49:10.576384 791 log.go:172] (0xc00003b600) (0xc00060b900) Stream removed, broadcasting: 1\nI0311 23:49:10.576397 791 log.go:172] (0xc00003b600) Go away received\nI0311 23:49:10.576683 791 log.go:172] (0xc00003b600) (0xc00060b900) Stream removed, broadcasting: 1\nI0311 23:49:10.576696 791 log.go:172] (0xc00003b600) (0xc000acc000) Stream removed, broadcasting: 3\nI0311 23:49:10.576702 791 log.go:172] (0xc00003b600) (0xc00060bae0) Stream removed, broadcasting: 5\n" Mar 11 23:49:10.579: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 11 23:49:10.579: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 11 23:49:10.582: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 11 23:49:20.586: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 23:49:20.586: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 23:49:20.586: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 11 23:49:20.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 11 23:49:20.765: INFO: stderr: "I0311 23:49:20.698232 813 log.go:172] (0xc0003c5340) (0xc0009ae000) Create stream\nI0311 23:49:20.698290 813 log.go:172] (0xc0003c5340) (0xc0009ae000) Stream added, broadcasting: 1\nI0311 23:49:20.701361 813 log.go:172] (0xc0003c5340) Reply frame received for 1\nI0311 23:49:20.701398 813 log.go:172] (0xc0003c5340) (0xc0006f9b80) Create stream\nI0311 23:49:20.701406 813 log.go:172] (0xc0003c5340) (0xc0006f9b80) Stream added, broadcasting: 3\nI0311 23:49:20.702439 813 log.go:172] (0xc0003c5340) Reply frame received for 3\nI0311 23:49:20.702467 813 log.go:172] (0xc0003c5340) (0xc000206000) Create stream\nI0311 23:49:20.702480 813 log.go:172] (0xc0003c5340) (0xc000206000) Stream added, broadcasting: 5\nI0311 23:49:20.703409 813 log.go:172] 
(0xc0003c5340) Reply frame received for 5\nI0311 23:49:20.760780 813 log.go:172] (0xc0003c5340) Data frame received for 3\nI0311 23:49:20.760803 813 log.go:172] (0xc0006f9b80) (3) Data frame handling\nI0311 23:49:20.760822 813 log.go:172] (0xc0006f9b80) (3) Data frame sent\nI0311 23:49:20.760829 813 log.go:172] (0xc0003c5340) Data frame received for 3\nI0311 23:49:20.760834 813 log.go:172] (0xc0006f9b80) (3) Data frame handling\nI0311 23:49:20.760898 813 log.go:172] (0xc0003c5340) Data frame received for 5\nI0311 23:49:20.760914 813 log.go:172] (0xc000206000) (5) Data frame handling\nI0311 23:49:20.760927 813 log.go:172] (0xc000206000) (5) Data frame sent\nI0311 23:49:20.760932 813 log.go:172] (0xc0003c5340) Data frame received for 5\nI0311 23:49:20.760937 813 log.go:172] (0xc000206000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0311 23:49:20.762540 813 log.go:172] (0xc0003c5340) Data frame received for 1\nI0311 23:49:20.762558 813 log.go:172] (0xc0009ae000) (1) Data frame handling\nI0311 23:49:20.762567 813 log.go:172] (0xc0009ae000) (1) Data frame sent\nI0311 23:49:20.762577 813 log.go:172] (0xc0003c5340) (0xc0009ae000) Stream removed, broadcasting: 1\nI0311 23:49:20.762635 813 log.go:172] (0xc0003c5340) Go away received\nI0311 23:49:20.762848 813 log.go:172] (0xc0003c5340) (0xc0009ae000) Stream removed, broadcasting: 1\nI0311 23:49:20.762863 813 log.go:172] (0xc0003c5340) (0xc0006f9b80) Stream removed, broadcasting: 3\nI0311 23:49:20.762871 813 log.go:172] (0xc0003c5340) (0xc000206000) Stream removed, broadcasting: 5\n" Mar 11 23:49:20.765: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 11 23:49:20.765: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 11 23:49:20.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 11 23:49:20.976: INFO: stderr: "I0311 23:49:20.875111 834 log.go:172] (0xc000be64d0) (0xc000bde0a0) Create stream\nI0311 23:49:20.875156 834 log.go:172] (0xc000be64d0) (0xc000bde0a0) Stream added, broadcasting: 1\nI0311 23:49:20.876714 834 log.go:172] (0xc000be64d0) Reply frame received for 1\nI0311 23:49:20.876737 834 log.go:172] (0xc000be64d0) (0xc000c0e0a0) Create stream\nI0311 23:49:20.876743 834 log.go:172] (0xc000be64d0) (0xc000c0e0a0) Stream added, broadcasting: 3\nI0311 23:49:20.877394 834 log.go:172] (0xc000be64d0) Reply frame received for 3\nI0311 23:49:20.877428 834 log.go:172] (0xc000be64d0) (0xc000bde140) Create stream\nI0311 23:49:20.877439 834 log.go:172] (0xc000be64d0) (0xc000bde140) Stream added, broadcasting: 5\nI0311 23:49:20.878048 834 log.go:172] (0xc000be64d0) Reply frame received for 5\nI0311 23:49:20.943821 834 log.go:172] (0xc000be64d0) Data frame received for 5\nI0311 23:49:20.943844 834 log.go:172] (0xc000bde140) (5) Data frame handling\nI0311 23:49:20.943857 834 log.go:172] (0xc000bde140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0311 23:49:20.970710 834 log.go:172] (0xc000be64d0) Data frame received for 5\nI0311 23:49:20.970980 834 log.go:172] (0xc000bde140) (5) Data frame handling\nI0311 23:49:20.972280 834 log.go:172] (0xc000be64d0) Data frame received for 3\nI0311 23:49:20.972314 834 log.go:172] (0xc000c0e0a0) (3) Data frame handling\nI0311 23:49:20.972330 834 log.go:172] 
(0xc000c0e0a0) (3) Data frame sent\nI0311 23:49:20.972345 834 log.go:172] (0xc000be64d0) Data frame received for 3\nI0311 23:49:20.972366 834 log.go:172] (0xc000c0e0a0) (3) Data frame handling\nI0311 23:49:20.972891 834 log.go:172] (0xc000be64d0) Data frame received for 1\nI0311 23:49:20.972909 834 log.go:172] (0xc000bde0a0) (1) Data frame handling\nI0311 23:49:20.972919 834 log.go:172] (0xc000bde0a0) (1) Data frame sent\nI0311 23:49:20.972932 834 log.go:172] (0xc000be64d0) (0xc000bde0a0) Stream removed, broadcasting: 1\nI0311 23:49:20.972943 834 log.go:172] (0xc000be64d0) Go away received\nI0311 23:49:20.973242 834 log.go:172] (0xc000be64d0) (0xc000bde0a0) Stream removed, broadcasting: 1\nI0311 23:49:20.973259 834 log.go:172] (0xc000be64d0) (0xc000c0e0a0) Stream removed, broadcasting: 3\nI0311 23:49:20.973267 834 log.go:172] (0xc000be64d0) (0xc000bde140) Stream removed, broadcasting: 5\n" Mar 11 23:49:20.976: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 11 23:49:20.976: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 11 23:49:20.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 11 23:49:21.173: INFO: stderr: "I0311 23:49:21.088372 855 log.go:172] (0xc00003b080) (0xc00066fb80) Create stream\nI0311 23:49:21.088410 855 log.go:172] (0xc00003b080) (0xc00066fb80) Stream added, broadcasting: 1\nI0311 23:49:21.089925 855 log.go:172] (0xc00003b080) Reply frame received for 1\nI0311 23:49:21.089947 855 log.go:172] (0xc00003b080) (0xc000a24000) Create stream\nI0311 23:49:21.089953 855 log.go:172] (0xc00003b080) (0xc000a24000) Stream added, broadcasting: 3\nI0311 23:49:21.090533 855 log.go:172] (0xc00003b080) Reply frame received for 3\nI0311 23:49:21.090557 855 log.go:172] (0xc00003b080) (0xc00066fc20) Create stream\nI0311 23:49:21.090568 855 log.go:172] (0xc00003b080) (0xc00066fc20) Stream added, broadcasting: 5\nI0311 23:49:21.091152 855 log.go:172] (0xc00003b080) Reply frame received for 5\nI0311 23:49:21.147566 855 log.go:172] (0xc00003b080) Data frame received for 5\nI0311 23:49:21.147588 855 log.go:172] (0xc00066fc20) (5) Data frame handling\nI0311 23:49:21.147601 855 log.go:172] (0xc00066fc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0311 23:49:21.169226 855 log.go:172] (0xc00003b080) Data frame received for 5\nI0311 23:49:21.169242 855 log.go:172] (0xc00066fc20) (5) Data frame handling\nI0311 23:49:21.169267 855 log.go:172] (0xc00003b080) Data frame received for 3\nI0311 23:49:21.169297 855 log.go:172] (0xc000a24000) (3) Data frame handling\nI0311 23:49:21.169317 855 log.go:172] (0xc000a24000) (3) Data frame sent\nI0311 23:49:21.169327 855 log.go:172] (0xc00003b080) Data frame received for 3\nI0311 23:49:21.169334 855 log.go:172] (0xc000a24000) (3) Data frame handling\nI0311 23:49:21.170363 855 log.go:172] (0xc00003b080) Data frame received for 1\nI0311 23:49:21.170377 855 log.go:172] (0xc00066fb80) (1) Data frame handling\nI0311 23:49:21.170385 855 log.go:172] (0xc00066fb80) (1) Data frame sent\nI0311 23:49:21.170394 855 log.go:172] (0xc00003b080) (0xc00066fb80) Stream removed, broadcasting: 1\nI0311 23:49:21.170408 855 log.go:172] (0xc00003b080) Go away received\nI0311 23:49:21.170776 855 log.go:172] (0xc00003b080) (0xc00066fb80) Stream removed, 
broadcasting: 1\nI0311 23:49:21.170797 855 log.go:172] (0xc00003b080) (0xc000a24000) Stream removed, broadcasting: 3\nI0311 23:49:21.170806 855 log.go:172] (0xc00003b080) (0xc00066fc20) Stream removed, broadcasting: 5\n" Mar 11 23:49:21.173: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 11 23:49:21.173: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 11 23:49:21.173: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 23:49:21.176: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 11 23:49:31.188: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 23:49:31.188: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 11 23:49:31.188: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 11 23:49:31.207: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:31.207: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:39 +0000 UTC }] Mar 11 23:49:31.207: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:31.207: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:31.207: INFO: Mar 11 23:49:31.207: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 23:49:32.211: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:32.211: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:48:39 +0000 UTC }] Mar 11 23:49:32.211: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:32.211: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:32.211: INFO: Mar 11 23:49:32.211: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 23:49:33.217: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:33.217: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:33.217: INFO: Mar 11 23:49:33.217: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 11 23:49:34.227: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:34.227: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:34.227: INFO: Mar 11 23:49:34.227: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 11 23:49:35.232: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:35.232: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:35.232: INFO: Mar 11 23:49:35.232: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 11 23:49:36.236: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:36.236: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:36.236: INFO: Mar 11 23:49:36.236: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 11 23:49:37.240: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:37.240: INFO: ss-1 latest-worker2 Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:37.240: INFO: Mar 11 23:49:37.240: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 11 23:49:38.243: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:38.243: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:38.243: INFO: Mar 11 23:49:38.243: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 11 23:49:39.247: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:39.247: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:39.247: INFO: Mar 11 23:49:39.247: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 11 23:49:40.251: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 23:49:40.251: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:49:00 +0000 UTC }] Mar 11 23:49:40.251: INFO: Mar 11 23:49:40.251: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-3523 Mar 11 23:49:41.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:49:41.382: INFO: rc: 1 Mar 11 23:49:41.382: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 11 23:49:51.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/local/apache2/htdocs/ || true' Mar 11 23:49:51.481: INFO: rc: 1 Mar 11 23:49:51.481: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:50:01.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:50:01.559: INFO: rc: 1 Mar 11 23:50:01.559: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:50:11.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:50:11.659: INFO: rc: 1 Mar 11 23:50:11.660: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:50:21.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:50:21.779: INFO: rc: 1 Mar 11 23:50:21.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:50:31.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:50:31.890: INFO: rc: 1 Mar 11 23:50:31.890: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:50:41.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:50:42.025: INFO: rc: 1 Mar 11 23:50:42.026: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:50:52.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:50:52.141: INFO: rc: 1 Mar 11 23:50:52.141: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:51:02.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:51:02.247: INFO: rc: 1 Mar 11 23:51:02.247: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:51:12.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:51:12.346: INFO: rc: 1 Mar 11 23:51:12.347: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:51:22.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:51:22.422: INFO: rc: 1 Mar 11 23:51:22.422: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:51:32.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:51:32.533: INFO: rc: 1 Mar 11 23:51:32.533: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:51:42.534: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:51:42.627: INFO: rc: 1 Mar 11 23:51:42.627: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:51:52.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:51:52.745: INFO: rc: 1 Mar 11 23:51:52.745: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:52:02.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:52:02.847: INFO: rc: 1 Mar 11 23:52:02.847: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:52:12.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:52:12.941: INFO: rc: 1 Mar 11 23:52:12.941: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:52:22.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:52:23.028: INFO: rc: 1 Mar 11 23:52:23.028: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:52:33.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:52:33.147: INFO: rc: 1 Mar 11 23:52:33.147: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:52:43.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:52:43.252: INFO: rc: 1 Mar 11 23:52:43.252: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:52:53.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:52:53.351: INFO: rc: 1 Mar 11 23:52:53.351: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:53:03.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:53:03.438: INFO: rc: 1 Mar 11 23:53:03.438: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:53:13.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:53:13.541: INFO: rc: 1 Mar 11 23:53:13.541: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:53:23.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:53:23.659: INFO: rc: 1 Mar 11 23:53:23.659: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:53:33.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:53:33.775: INFO: rc: 1 Mar 11 23:53:33.775: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:53:43.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:53:43.868: INFO: rc: 1 Mar 11 23:53:43.868: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:53:53.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:53:53.973: INFO: rc: 1 Mar 11 23:53:53.973: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:54:03.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:54:04.085: INFO: rc: 1 Mar 11 23:54:04.085: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:54:14.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:54:14.213: INFO: rc: 1 Mar 11 23:54:14.213: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:54:24.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Mar 11 23:54:24.277: INFO: rc: 1 Mar 11 23:54:24.277: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:54:34.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:54:34.379: INFO: rc: 1 Mar 11 23:54:34.379: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Mar 11 23:54:44.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 11 23:54:44.499: INFO: rc: 1 Mar 11 23:54:44.499: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Mar 11 23:54:44.499: INFO: Scaling statefulset ss to 0 Mar 11 23:54:44.507: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 11 23:54:44.509: INFO: Deleting all statefulset in ns statefulset-3523 Mar 11 23:54:44.511: INFO: Scaling statefulset ss to 0 Mar 11 23:54:44.518: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 23:54:44.520: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:54:44.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3523" for this suite. 
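The long retry loop above is the framework's RunHostCmd helper at work: once ss-1 was deleted during the burst scale-down, every kubectl exec failed with 'pods "ss-1" not found', and the harness retried every 10 seconds until its timeout expired, after which it scaled the set to 0 and moved on. A minimal sketch of performing that scale-down by hand, reusing the namespace and StatefulSet name from the log (the exec line is the exact command the harness was retrying):

# Scale the StatefulSet down to zero and wait for status.replicas to follow.
kubectl scale statefulset ss --replicas=0 --namespace=statefulset-3523
kubectl rollout status statefulset/ss --namespace=statefulset-3523 --timeout=5m
# The retried command; '|| true' masks a failed mv inside the pod, but
# kubectl itself still exits non-zero once the pod no longer exists.
kubectl exec --namespace=statefulset-3523 ss-1 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'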
• [SLOW TEST:364.933 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":71,"skipped":1106,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:54:44.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:54:45.073: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 23:54:47.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567685, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567685, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567685, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567685, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 23:54:50.119: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the 
AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:54:50.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3211" for this suite. STEP: Destroying namespace "webhook-3211-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.793 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":72,"skipped":1110,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:54:50.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:54:50.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7067619-ca71-4a49-98d4-6cc772bb18b5" in namespace "downward-api-6938" to be "success or failure" Mar 11 23:54:50.509: INFO: Pod "downwardapi-volume-d7067619-ca71-4a49-98d4-6cc772bb18b5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.813846ms Mar 11 23:54:52.513: INFO: Pod "downwardapi-volume-d7067619-ca71-4a49-98d4-6cc772bb18b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.029299082s STEP: Saw pod success Mar 11 23:54:52.513: INFO: Pod "downwardapi-volume-d7067619-ca71-4a49-98d4-6cc772bb18b5" satisfied condition "success or failure" Mar 11 23:54:52.516: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d7067619-ca71-4a49-98d4-6cc772bb18b5 container client-container: STEP: delete the pod Mar 11 23:54:52.546: INFO: Waiting for pod downwardapi-volume-d7067619-ca71-4a49-98d4-6cc772bb18b5 to disappear Mar 11 23:54:52.551: INFO: Pod downwardapi-volume-d7067619-ca71-4a49-98d4-6cc772bb18b5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:54:52.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6938" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":73,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:54:52.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Mar 11 23:54:52.600: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:54:57.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3145" for this suite. 
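The init-container test above logs only "PodSpec: initContainers in spec.initContainers", so the manifest itself is not shown. What it exercises is a RestartAlways pod whose init containers must each exit successfully, in order, before the regular container starts. A minimal sketch of that shape, with hypothetical names and busybox images standing in for the test's real spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                      # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'true']      # must exit 0 before init-2 runs
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'true']      # must exit 0 before the app container runs
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
EOF
# Watch the pod move through Init:0/2 -> Init:1/2 -> PodInitializing -> Running.
kubectl get pod init-demo --watch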
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":74,"skipped":1207,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:54:57.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:55:57.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2824" for this suite. • [SLOW TEST:60.098 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":75,"skipped":1208,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:55:57.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 23:55:58.277: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 23:56:00.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567758, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567758, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567758, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719567758, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 23:56:03.323: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:56:03.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9729" for this suite. STEP: Destroying namespace "webhook-9729-markers" for this suite. 
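The patch and update steps above are what give this spec its name: the webhook's rules are first rewritten so CREATE is no longer intercepted (the non-compliant configMap is admitted), then patched to intercept CREATE again (it is rejected once more). A sketch of the same toggle done from the command line, assuming a hypothetical configuration name; the test itself drives the AdmissionRegistration API directly rather than shelling out:

# Drop CREATE from the first rule of the first webhook ...
kubectl patch validatingwebhookconfiguration e2e-test-webhook-config \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# ... and restore it.
kubectl patch validatingwebhookconfiguration e2e-test-webhook-config \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'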
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.041 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":76,"skipped":1225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:56:03.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 11 23:56:03.611: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a7969a31-6f35-4522-bf42-52a1934ce7bc" in namespace "security-context-test-8200" to be "success or failure" Mar 11 23:56:03.614: INFO: Pod "busybox-privileged-false-a7969a31-6f35-4522-bf42-52a1934ce7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.567193ms Mar 11 23:56:05.618: INFO: Pod "busybox-privileged-false-a7969a31-6f35-4522-bf42-52a1934ce7bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007586003s Mar 11 23:56:05.618: INFO: Pod "busybox-privileged-false-a7969a31-6f35-4522-bf42-52a1934ce7bc" satisfied condition "success or failure" Mar 11 23:56:05.641: INFO: Got logs for pod "busybox-privileged-false-a7969a31-6f35-4522-bf42-52a1934ce7bc": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:56:05.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8200" for this suite. 
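The "ip: RTNETLINK answers: Operation not permitted" line captured above is the expected outcome: with privileged set to false, the busybox ip applet cannot modify kernel network state. A minimal sketch of an equivalent pod, with hypothetical names (the test's own image and exact command are not shown in the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ['ip', 'link', 'add', 'dummy0', 'type', 'dummy']
    securityContext:
      privileged: false                # the property under test
EOF
# Expect: ip: RTNETLINK answers: Operation not permitted
kubectl logs pod/busybox-privileged-false-demo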
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":77,"skipped":1266,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:56:05.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 11 23:56:05.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-089b2529-51ac-4ec1-9346-0710f676ece0" in namespace "projected-3060" to be "success or failure" Mar 11 23:56:05.740: INFO: Pod "downwardapi-volume-089b2529-51ac-4ec1-9346-0710f676ece0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093021ms Mar 11 23:56:07.744: INFO: Pod "downwardapi-volume-089b2529-51ac-4ec1-9346-0710f676ece0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008163453s STEP: Saw pod success Mar 11 23:56:07.744: INFO: Pod "downwardapi-volume-089b2529-51ac-4ec1-9346-0710f676ece0" satisfied condition "success or failure" Mar 11 23:56:07.748: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-089b2529-51ac-4ec1-9346-0710f676ece0 container client-container: STEP: delete the pod Mar 11 23:56:07.813: INFO: Waiting for pod downwardapi-volume-089b2529-51ac-4ec1-9346-0710f676ece0 to disappear Mar 11 23:56:07.816: INFO: Pod downwardapi-volume-089b2529-51ac-4ec1-9346-0710f676ece0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 11 23:56:07.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3060" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":78,"skipped":1277,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 11 23:56:07.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9664.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9664.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9664.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 23:56:12.066: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.070: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.073: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.075: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.084: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.087: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.089: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.092: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:12.116: INFO: Lookups using dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local] Mar 11 23:56:17.121: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource 
(get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.124: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.128: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.131: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.140: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.143: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.146: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.149: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:17.155: INFO: Lookups using dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local] Mar 11 23:56:22.120: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.123: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.128: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local from 
pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.135: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.137: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.140: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.142: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:22.147: INFO: Lookups using dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local] Mar 11 23:56:27.121: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.124: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.128: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.131: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.139: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.141: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods 
dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.143: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.145: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:27.149: INFO: Lookups using dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local] Mar 11 23:56:32.121: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.124: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.127: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.130: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.138: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.140: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.142: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.145: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:32.150: INFO: Lookups using dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local] Mar 11 23:56:37.121: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.125: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.128: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.131: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.139: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.141: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.143: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.146: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local from pod dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7: the server could not find the requested resource (get pods dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7) Mar 11 23:56:37.152: INFO: Lookups using dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9664.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9664.svc.cluster.local jessie_udp@dns-test-service-2.dns-9664.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9664.svc.cluster.local] Mar 11 23:56:42.158: INFO: DNS probes using dns-9664/dns-test-bcce43b6-8d29-405b-baab-e50c5d39bdb7 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 11 23:56:42.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9664" for this suite.
• [SLOW TEST:34.546 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":79,"skipped":1284,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 11 23:56:42.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-395
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-395
STEP: Creating statefulset with conflicting port in namespace statefulset-395
STEP: Waiting until pod test-pod will start running in namespace statefulset-395
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-395
Mar 11 23:56:46.489: INFO: Observed stateful pod in namespace: statefulset-395, name: ss-0, uid: 50128c9e-9fea-4501-a2c3-0d85c1fab4ed, status phase: Pending. Waiting for statefulset controller to delete.
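The wait that times out below is a UID check: the framework records the UID of the Pending ss-0 and requires the statefulset controller to delete and recreate the pod, which yields a new UID. A minimal sketch of an equivalent probe, assuming kubectl access to the same cluster (the namespace and pod name are the ones from this run; the real test uses a client-go watch rather than polling):

```go
// Poll pod ss-0 until its UID changes, i.e. until the statefulset
// controller has deleted and recreated it at least once.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podUID returns the current UID of the named pod via kubectl.
// While the pod is deleted, kubectl exits nonzero and err is set.
func podUID(ns, name string) (string, error) {
	out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
		"-o", "jsonpath={.metadata.uid}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const ns, pod = "statefulset-395", "ss-0"
	initial, err := podUID(ns, pod)
	if err != nil {
		fmt.Println("initial get failed:", err)
		return
	}
	deadline := time.Now().Add(5 * time.Minute) // same 5m budget as the test
	for time.Now().Before(deadline) {
		uid, err := podUID(ns, pod)
		if err == nil && uid != "" && uid != initial {
			fmt.Printf("pod %s recreated: uid %s -> %s\n", pod, initial, uid)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("FAIL: pod", pod, "was never recreated") // what the log reports next
}
```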
Mar 12 00:01:46.490: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12()
        /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762 +0x11ba
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a97e00)
        _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc002a97e00)
        _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc002a97e00, 0x4c9f938)
        /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 12 00:01:46.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-395'
Mar 12 00:01:48.953: INFO: stderr: ""
Mar 12 00:01:48.953: INFO: stdout: "Name: ss-0\nNamespace: statefulset-395\nPriority: 0\nNode: latest-worker/\nLabels: baz=blah\n controller-revision-hash=ss-84f8fd7c56\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nIPs: \nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Image: docker.io/library/httpd:2.4.38-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-t7nwl (ro)\nVolumes:\n default-token-t7nwl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-t7nwl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m6s kubelet, latest-worker Predicate PodFitsHostPorts failed\n"
Mar 12 00:01:48.953: INFO: Output of kubectl describe ss-0:
Name: ss-0
Namespace: statefulset-395
Priority: 0
Node: latest-worker/
Labels: baz=blah
  controller-revision-hash=ss-84f8fd7c56
  foo=bar
  statefulset.kubernetes.io/pod-name=ss-0
Annotations:
Status: Pending
IP:
IPs:
Controlled By: StatefulSet/ss
Containers:
  webserver:
    Image: docker.io/library/httpd:2.4.38-alpine
    Port: 21017/TCP
    Host Port: 21017/TCP
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t7nwl (ro)
Volumes:
  default-token-t7nwl:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-t7nwl
    Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
  node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                    Message
  ----     ------            ----  ----                    -------
  Warning  PodFitsHostPorts  5m6s  kubelet, latest-worker  Predicate PodFitsHostPorts failed
Mar 12 00:01:48.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-395 --tail=100'
Mar 12 00:01:49.090: INFO: rc: 1
Mar 12 00:01:49.091: INFO: Last 100 log lines of ss-0:
Mar 12 00:01:49.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-395'
Mar 12 00:01:49.200: INFO: stderr: ""
Mar 12 00:01:49.200: INFO: stdout: "Name: test-pod\nNamespace: statefulset-395\nPriority: 0\nNode: latest-worker/172.17.0.16\nStart Time: Wed, 11
Mar 2020 23:56:42 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.244.1.2\nIPs:\n IP: 10.244.1.2\nContainers:\n webserver:\n Container ID: containerd://aa25458c12579b5256a66d2214c55397d4d1753e58dcbd7fa6b84a8048ce0161\n Image: docker.io/library/httpd:2.4.38-alpine\n Image ID: docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Wed, 11 Mar 2020 23:56:43 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-t7nwl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-t7nwl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-t7nwl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m6s kubelet, latest-worker Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\n Normal Created 5m6s kubelet, latest-worker Created container webserver\n Normal Started 5m6s kubelet, latest-worker Started container webserver\n" Mar 12 00:01:49.200: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-395 Priority: 0 Node: latest-worker/172.17.0.16 Start Time: Wed, 11 Mar 2020 23:56:42 +0000 Labels: Annotations: Status: Running IP: 10.244.1.2 IPs: IP: 10.244.1.2 Containers: webserver: Container ID: containerd://aa25458c12579b5256a66d2214c55397d4d1753e58dcbd7fa6b84a8048ce0161 Image: docker.io/library/httpd:2.4.38-alpine Image ID: docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Wed, 11 Mar 2020 23:56:43 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-t7nwl (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-t7nwl: Type: Secret (a volume populated by a Secret) SecretName: default-token-t7nwl Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m6s kubelet, latest-worker Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine Normal Created 5m6s kubelet, latest-worker Created container webserver Normal Started 5m6s kubelet, latest-worker Started container webserver Mar 12 00:01:49.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-395 --tail=100' Mar 12 00:01:49.280: INFO: stderr: "" Mar 12 00:01:49.280: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. 
Set the 'ServerName' directive globally to suppress this message\n[Wed Mar 11 23:56:43.707626 2020] [mpm_event:notice] [pid 1:tid 140041415904104] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Mar 11 23:56:43.707677 2020] [core:notice] [pid 1:tid 140041415904104] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Mar 12 00:01:49.280: INFO: Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message
[Wed Mar 11 23:56:43.707626 2020] [mpm_event:notice] [pid 1:tid 140041415904104] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Mar 11 23:56:43.707677 2020] [core:notice] [pid 1:tid 140041415904104] AH00094: Command line: 'httpd -D FOREGROUND'
Mar 12 00:01:49.280: INFO: Deleting all statefulset in ns statefulset-395
Mar 12 00:01:49.282: INFO: Scaling statefulset ss to 0
Mar 12 00:01:59.298: INFO: Waiting for statefulset status.replicas updated to 0
Mar 12 00:01:59.301: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "statefulset-395".
STEP: Found 9 events.
Mar 12 00:01:59.322: INFO: At 2020-03-11 23:56:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Mar 12 00:01:59.322: INFO: At 2020-03-11 23:56:42 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-395/ss is recreating failed Pod ss-0
Mar 12 00:01:59.322: INFO: At 2020-03-11 23:56:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Mar 12 00:01:59.322: INFO: At 2020-03-11 23:56:42 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Mar 12 00:01:59.322: INFO: At 2020-03-11 23:56:42 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Mar 12 00:01:59.322: INFO: At 2020-03-11 23:56:42 +0000 UTC - event for ss-0: {kubelet latest-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Mar 12 00:01:59.323: INFO: At 2020-03-11 23:56:43 +0000 UTC - event for test-pod: {kubelet latest-worker} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Mar 12 00:01:59.323: INFO: At 2020-03-11 23:56:43 +0000 UTC - event for test-pod: {kubelet latest-worker} Created: Created container webserver
Mar 12 00:01:59.323: INFO: At 2020-03-11 23:56:43 +0000 UTC - event for test-pod: {kubelet latest-worker} Started: Started container webserver
Mar 12 00:01:59.325: INFO: POD       NODE           PHASE    GRACE  CONDITIONS
Mar 12 00:01:59.325: INFO: test-pod  latest-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:56:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:56:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 23:56:42 +0000 UTC }]
Mar 12 00:01:59.325: INFO:
Mar 12 00:01:59.328: INFO: Logging node info for node latest-control-plane
Mar 12 00:01:59.331: INFO: Node Info:
&Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane 4f2de9c7-bd2a-4e64-9ef8-bb34a60bd143 932747 0 2020-03-08 14:49:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-11 23:57:13 +0000 UTC,LastTransitionTime:2020-03-08 14:49:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-11 23:57:13 +0000 UTC,LastTransitionTime:2020-03-08 14:49:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-11 23:57:13 +0000 UTC,LastTransitionTime:2020-03-08 14:49:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-11 23:57:13 +0000 UTC,LastTransitionTime:2020-03-08 14:50:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.17,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fb03af8223ea4430b6faaad8b31da5e5,SystemUUID:220fc748-c3b9-4de4-aa76-4a3520169f00,BootID:3de0b5b8-8b8f-48d3-9705-cabccc881bdb,KernelVersion:4.4.0-142-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 12 00:01:59.331: INFO: Logging kubelet events for node latest-control-plane Mar 12 00:01:59.333: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 12 00:01:59.358: INFO: kindnet-gp8bt started at 2020-03-08 14:49:40 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 00:01:59.358: INFO: kube-proxy-nxxmk started at 2020-03-08 14:49:40 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 00:01:59.358: INFO: coredns-6955765f44-gxrvh started at 2020-03-08 14:50:19 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container coredns ready: true, restart count 0 Mar 12 00:01:59.358: INFO: local-path-provisioner-7745554f7f-52xw4 started at 2020-03-08 14:50:19 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 12 00:01:59.358: INFO: etcd-latest-control-plane started at 2020-03-08 14:49:26 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container etcd ready: true, restart count 0 Mar 12 00:01:59.358: INFO: kube-apiserver-latest-control-plane started at 2020-03-08 14:49:26 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container kube-apiserver ready: true, restart count 0 Mar 12 00:01:59.358: INFO: kube-controller-manager-latest-control-plane started at 2020-03-08 14:49:26 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 12 00:01:59.358: INFO: kube-scheduler-latest-control-plane started at 2020-03-08 14:49:26 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.358: INFO: Container kube-scheduler ready: true, restart count 0 W0312 00:01:59.361901 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
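The node dumps in this teardown are the framework's failure diagnostics; the Conditions list inside each Node object is what waits like "Waiting up to 3m0s for all (but 0) nodes to be ready" key off. A small hedged sketch for pulling just the Ready condition with a JSONPath query (the node names are the ones in this log, and kubectl is assumed to point at the same cluster):

```go
// Print the Ready condition status for each node in this kind cluster.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, node := range []string{"latest-control-plane", "latest-worker", "latest-worker2"} {
		// JSONPath filter: select the condition whose type is "Ready".
		out, err := exec.Command("kubectl", "get", "node", node, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			fmt.Println(node, "error:", err)
			continue
		}
		fmt.Printf("%s Ready=%s\n", node, out)
	}
}
```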
Mar 12 00:01:59.425: INFO: Latency metrics for node latest-control-plane Mar 12 00:01:59.425: INFO: Logging node info for node latest-worker Mar 12 00:01:59.428: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 9843beed-1f18-4845-a6ab-de938079677a 933073 0 2020-03-08 14:49:42 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-11 23:59:06 +0000 UTC,LastTransitionTime:2020-03-08 14:49:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-11 23:59:06 +0000 UTC,LastTransitionTime:2020-03-08 14:49:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-11 23:59:06 +0000 UTC,LastTransitionTime:2020-03-08 14:49:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-11 23:59:06 +0000 UTC,LastTransitionTime:2020-03-08 14:50:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.16,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1afe602e7cc94c5ebabdc8852fcc6918,SystemUUID:f29a62f6-4d3f-4111-9bfc-fbc0b81ee5c1,BootID:3de0b5b8-8b8f-48d3-9705-cabccc881bdb,KernelVersion:4.4.0-142-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a],SizeBytes:764872,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135 docker.io/library/busybox:latest],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 12 00:01:59.429: INFO: Logging kubelet events for node latest-worker Mar 12 00:01:59.431: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 12 00:01:59.435: INFO: kube-proxy-9jc24 started at 2020-03-08 14:49:42 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.435: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 00:01:59.435: INFO: kindnet-2j5xm started at 2020-03-08 14:49:42 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.435: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 00:01:59.435: INFO: test-pod started at 2020-03-11 23:56:42 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.435: INFO: Container webserver ready: true, restart count 0 W0312 00:01:59.438007 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 00:01:59.505: INFO: Latency metrics for node latest-worker Mar 12 00:01:59.505: INFO: Logging node info for node latest-worker2 Mar 12 00:01:59.508: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 22a41336-f474-44a7-b57e-b8eab5db4091 932711 0 2020-03-08 14:49:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-11 23:56:59 +0000 UTC,LastTransitionTime:2020-03-08 14:49:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-11 23:56:59 +0000 UTC,LastTransitionTime:2020-03-08 14:49:56 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-11 23:56:59 +0000 UTC,LastTransitionTime:2020-03-08 14:49:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-11 23:56:59 +0000 UTC,LastTransitionTime:2020-03-08 14:50:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.18,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fa6b2451c7614b1f8e7963d8f15176a7,SystemUUID:35f006ce-d321-497c-8063-6f4d5c1e28bd,BootID:3de0b5b8-8b8f-48d3-9705-cabccc881bdb,KernelVersion:4.4.0-142-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 12 00:01:59.508: INFO: Logging kubelet events for node latest-worker2 Mar 12 00:01:59.511: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 12 00:01:59.524: INFO: kube-proxy-cx5xz started at 2020-03-08 14:49:56 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.524: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 00:01:59.524: INFO: kindnet-spz5f started at 2020-03-08 14:49:56 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.524: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 00:01:59.524: INFO: coredns-6955765f44-cgshp started at 2020-03-08 14:50:16 +0000 UTC (0+1 container statuses recorded) Mar 12 00:01:59.524: INFO: Container coredns ready: true, restart count 0 W0312 00:01:59.527434 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 00:01:59.566: INFO: Latency metrics for node latest-worker2 Mar 12 00:01:59.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-395" for this suite. 
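The collected events point at the root cause: test-pod already holds host port 21017 on latest-worker, so every ss-0 the controller creates there is rejected by the kubelet with "Predicate PodFitsHostPorts failed", and after the initial delete/recreate at 23:56:42 no further recreation was observed, which is what the failure summary below reports. A hedged one-off query for exactly those rejection events (namespace and object names are from this run):

```go
// List the kubelet rejection events for ss-0 in the test namespace.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "events",
		"-n", "statefulset-395",
		"--field-selector", "involvedObject.name=ss-0,reason=PodFitsHostPorts",
	).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	fmt.Print(string(out))
}
```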
• Failure [317.204 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685

    Mar 12 00:01:46.490: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":79,"skipped":1292,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:01:59.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Mar 12 00:01:59.672: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix373350172/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:01:59.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2638" for this suite.
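The proxy test above serves the API over a unix socket rather than a TCP port. A minimal Go client sketch for such a proxy, assuming it was started with --unix-socket=/tmp/kubectl-proxy.sock (the test itself used a generated path under /tmp):

```go
// GET /api/ through a kubectl proxy listening on a unix socket.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	const sock = "/tmp/kubectl-proxy.sock"
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the unix socket directly.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sock)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/") // same path the test retrieves
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}
```

The hostname in the URL is arbitrary since the custom dialer always connects to the socket; that is the same trick the e2e client uses to reach a --unix-socket proxy.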
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":280,"completed":80,"skipped":1313,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:01:59.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-cfade615-03be-495d-ab6e-42c4100a8d73 STEP: Creating a pod to test consume configMaps Mar 12 00:01:59.845: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38" in namespace "projected-4816" to be "success or failure" Mar 12 00:01:59.852: INFO: Pod "pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.680455ms Mar 12 00:02:01.856: INFO: Pod "pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010956749s Mar 12 00:02:03.884: INFO: Pod "pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03874947s STEP: Saw pod success Mar 12 00:02:03.884: INFO: Pod "pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38" satisfied condition "success or failure" Mar 12 00:02:03.888: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38 container projected-configmap-volume-test: STEP: delete the pod Mar 12 00:02:03.923: INFO: Waiting for pod pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38 to disappear Mar 12 00:02:03.934: INFO: Pod pod-projected-configmaps-41498c44-5821-4b9c-9e9c-83283cb3bc38 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:02:03.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4816" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":81,"skipped":1349,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:02:03.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 12 00:02:04.023: INFO: Waiting up to 5m0s for pod "pod-fe6528d5-6f38-4cd0-8a32-7bf5e239effa" in namespace "emptydir-2255" to be "success or failure" Mar 12 00:02:04.031: INFO: Pod "pod-fe6528d5-6f38-4cd0-8a32-7bf5e239effa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.804576ms Mar 12 00:02:06.042: INFO: Pod "pod-fe6528d5-6f38-4cd0-8a32-7bf5e239effa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019086305s STEP: Saw pod success Mar 12 00:02:06.042: INFO: Pod "pod-fe6528d5-6f38-4cd0-8a32-7bf5e239effa" satisfied condition "success or failure" Mar 12 00:02:06.047: INFO: Trying to get logs from node latest-worker pod pod-fe6528d5-6f38-4cd0-8a32-7bf5e239effa container test-container: STEP: delete the pod Mar 12 00:02:06.107: INFO: Waiting for pod pod-fe6528d5-6f38-4cd0-8a32-7bf5e239effa to disappear Mar 12 00:02:06.115: INFO: Pod pod-fe6528d5-6f38-4cd0-8a32-7bf5e239effa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:02:06.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2255" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":82,"skipped":1362,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:02:06.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5035 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Mar 12 00:02:06.199: INFO: Found 0 stateful pods, waiting for 3 Mar 12 00:02:16.202: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:02:16.202: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:02:16.202: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:02:16.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5035 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 00:02:16.443: INFO: stderr: "I0312 00:02:16.312400 1597 log.go:172] (0xc000a076b0) (0xc000b6a640) Create stream\nI0312 00:02:16.312439 1597 log.go:172] (0xc000a076b0) (0xc000b6a640) Stream added, broadcasting: 1\nI0312 00:02:16.314351 1597 log.go:172] (0xc000a076b0) Reply frame received for 1\nI0312 00:02:16.314379 1597 log.go:172] (0xc000a076b0) (0xc0002df2c0) Create stream\nI0312 00:02:16.314387 1597 log.go:172] (0xc000a076b0) (0xc0002df2c0) Stream added, broadcasting: 3\nI0312 00:02:16.315005 1597 log.go:172] (0xc000a076b0) Reply frame received for 3\nI0312 00:02:16.315028 1597 log.go:172] (0xc000a076b0) (0xc000c04140) Create stream\nI0312 00:02:16.315036 1597 log.go:172] (0xc000a076b0) (0xc000c04140) Stream added, broadcasting: 5\nI0312 00:02:16.315742 1597 log.go:172] (0xc000a076b0) Reply frame received for 5\nI0312 00:02:16.412747 1597 log.go:172] (0xc000a076b0) Data frame received for 5\nI0312 00:02:16.412769 1597 log.go:172] (0xc000c04140) (5) Data frame handling\nI0312 00:02:16.412778 1597 log.go:172] (0xc000c04140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 00:02:16.438207 1597 log.go:172] (0xc000a076b0) Data frame received for 3\nI0312 00:02:16.438229 1597 log.go:172] (0xc0002df2c0) (3) Data frame handling\nI0312 00:02:16.438241 1597 log.go:172] (0xc0002df2c0) 
(3) Data frame sent\nI0312 00:02:16.438671 1597 log.go:172] (0xc000a076b0) Data frame received for 3\nI0312 00:02:16.438687 1597 log.go:172] (0xc0002df2c0) (3) Data frame handling\nI0312 00:02:16.438706 1597 log.go:172] (0xc000a076b0) Data frame received for 5\nI0312 00:02:16.438727 1597 log.go:172] (0xc000c04140) (5) Data frame handling\nI0312 00:02:16.439539 1597 log.go:172] (0xc000a076b0) Data frame received for 1\nI0312 00:02:16.439550 1597 log.go:172] (0xc000b6a640) (1) Data frame handling\nI0312 00:02:16.439558 1597 log.go:172] (0xc000b6a640) (1) Data frame sent\nI0312 00:02:16.439699 1597 log.go:172] (0xc000a076b0) (0xc000b6a640) Stream removed, broadcasting: 1\nI0312 00:02:16.439718 1597 log.go:172] (0xc000a076b0) Go away received\nI0312 00:02:16.441459 1597 log.go:172] (0xc000a076b0) (0xc000b6a640) Stream removed, broadcasting: 1\nI0312 00:02:16.441473 1597 log.go:172] (0xc000a076b0) (0xc0002df2c0) Stream removed, broadcasting: 3\nI0312 00:02:16.441481 1597 log.go:172] (0xc000a076b0) (0xc000c04140) Stream removed, broadcasting: 5\n" Mar 12 00:02:16.444: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 00:02:16.444: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 12 00:02:26.503: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 12 00:02:36.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5035 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 00:02:36.761: INFO: stderr: "I0312 00:02:36.690227 1616 log.go:172] (0xc00058c840) (0xc000560000) Create stream\nI0312 00:02:36.690272 1616 log.go:172] (0xc00058c840) (0xc000560000) Stream added, broadcasting: 1\nI0312 00:02:36.692602 1616 log.go:172] (0xc00058c840) Reply frame received for 1\nI0312 00:02:36.692648 1616 log.go:172] (0xc00058c840) (0xc000649900) Create stream\nI0312 00:02:36.692664 1616 log.go:172] (0xc00058c840) (0xc000649900) Stream added, broadcasting: 3\nI0312 00:02:36.693566 1616 log.go:172] (0xc00058c840) Reply frame received for 3\nI0312 00:02:36.693594 1616 log.go:172] (0xc00058c840) (0xc000649ae0) Create stream\nI0312 00:02:36.693609 1616 log.go:172] (0xc00058c840) (0xc000649ae0) Stream added, broadcasting: 5\nI0312 00:02:36.694492 1616 log.go:172] (0xc00058c840) Reply frame received for 5\nI0312 00:02:36.756341 1616 log.go:172] (0xc00058c840) Data frame received for 5\nI0312 00:02:36.756367 1616 log.go:172] (0xc000649ae0) (5) Data frame handling\nI0312 00:02:36.756377 1616 log.go:172] (0xc000649ae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 00:02:36.756399 1616 log.go:172] (0xc00058c840) Data frame received for 3\nI0312 00:02:36.756426 1616 log.go:172] (0xc000649900) (3) Data frame handling\nI0312 00:02:36.756437 1616 log.go:172] (0xc000649900) (3) Data frame sent\nI0312 00:02:36.756452 1616 log.go:172] (0xc00058c840) Data frame received for 5\nI0312 00:02:36.756471 1616 log.go:172] (0xc000649ae0) (5) Data frame handling\nI0312 00:02:36.756487 1616 log.go:172] (0xc00058c840) Data frame received for 3\nI0312 00:02:36.756492 1616 log.go:172] (0xc000649900) (3) Data frame handling\nI0312 00:02:36.757581 1616 log.go:172] (0xc00058c840) 
Data frame received for 1\nI0312 00:02:36.757607 1616 log.go:172] (0xc000560000) (1) Data frame handling\nI0312 00:02:36.757631 1616 log.go:172] (0xc000560000) (1) Data frame sent\nI0312 00:02:36.757654 1616 log.go:172] (0xc00058c840) (0xc000560000) Stream removed, broadcasting: 1\nI0312 00:02:36.757672 1616 log.go:172] (0xc00058c840) Go away received\nI0312 00:02:36.758059 1616 log.go:172] (0xc00058c840) (0xc000560000) Stream removed, broadcasting: 1\nI0312 00:02:36.758078 1616 log.go:172] (0xc00058c840) (0xc000649900) Stream removed, broadcasting: 3\nI0312 00:02:36.758087 1616 log.go:172] (0xc00058c840) (0xc000649ae0) Stream removed, broadcasting: 5\n" Mar 12 00:02:36.761: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 00:02:36.761: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 00:02:56.786: INFO: Waiting for StatefulSet statefulset-5035/ss2 to complete update Mar 12 00:02:56.786: INFO: Waiting for Pod statefulset-5035/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 12 00:03:06.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5035 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 00:03:07.032: INFO: stderr: "I0312 00:03:06.917499 1638 log.go:172] (0xc000a5b6b0) (0xc0008b46e0) Create stream\nI0312 00:03:06.917543 1638 log.go:172] (0xc000a5b6b0) (0xc0008b46e0) Stream added, broadcasting: 1\nI0312 00:03:06.921587 1638 log.go:172] (0xc000a5b6b0) Reply frame received for 1\nI0312 00:03:06.921640 1638 log.go:172] (0xc000a5b6b0) (0xc00064a6e0) Create stream\nI0312 00:03:06.921662 1638 log.go:172] (0xc000a5b6b0) (0xc00064a6e0) Stream added, broadcasting: 3\nI0312 00:03:06.922887 1638 log.go:172] (0xc000a5b6b0) Reply frame received for 3\nI0312 00:03:06.922919 1638 log.go:172] (0xc000a5b6b0) (0xc000747360) Create stream\nI0312 00:03:06.922932 1638 log.go:172] (0xc000a5b6b0) (0xc000747360) Stream added, broadcasting: 5\nI0312 00:03:06.923900 1638 log.go:172] (0xc000a5b6b0) Reply frame received for 5\nI0312 00:03:06.997636 1638 log.go:172] (0xc000a5b6b0) Data frame received for 5\nI0312 00:03:06.997663 1638 log.go:172] (0xc000747360) (5) Data frame handling\nI0312 00:03:06.997680 1638 log.go:172] (0xc000747360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 00:03:07.027124 1638 log.go:172] (0xc000a5b6b0) Data frame received for 5\nI0312 00:03:07.027168 1638 log.go:172] (0xc000747360) (5) Data frame handling\nI0312 00:03:07.027193 1638 log.go:172] (0xc000a5b6b0) Data frame received for 3\nI0312 00:03:07.027204 1638 log.go:172] (0xc00064a6e0) (3) Data frame handling\nI0312 00:03:07.027212 1638 log.go:172] (0xc00064a6e0) (3) Data frame sent\nI0312 00:03:07.027445 1638 log.go:172] (0xc000a5b6b0) Data frame received for 3\nI0312 00:03:07.027460 1638 log.go:172] (0xc00064a6e0) (3) Data frame handling\nI0312 00:03:07.029299 1638 log.go:172] (0xc000a5b6b0) Data frame received for 1\nI0312 00:03:07.029316 1638 log.go:172] (0xc0008b46e0) (1) Data frame handling\nI0312 00:03:07.029328 1638 log.go:172] (0xc0008b46e0) (1) Data frame sent\nI0312 00:03:07.029341 1638 log.go:172] (0xc000a5b6b0) (0xc0008b46e0) Stream removed, broadcasting: 1\nI0312 00:03:07.029653 1638 log.go:172] (0xc000a5b6b0) (0xc0008b46e0) Stream removed, broadcasting: 1\nI0312 
00:03:07.029673 1638 log.go:172] (0xc000a5b6b0) (0xc00064a6e0) Stream removed, broadcasting: 3\nI0312 00:03:07.029855 1638 log.go:172] (0xc000a5b6b0) (0xc000747360) Stream removed, broadcasting: 5\nI0312 00:03:07.029939 1638 log.go:172] (0xc000a5b6b0) Go away received\n" Mar 12 00:03:07.032: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 00:03:07.032: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 00:03:17.073: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 12 00:03:27.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5035 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 00:03:27.323: INFO: stderr: "I0312 00:03:27.241663 1658 log.go:172] (0xc0008640b0) (0xc0009b66e0) Create stream\nI0312 00:03:27.241721 1658 log.go:172] (0xc0008640b0) (0xc0009b66e0) Stream added, broadcasting: 1\nI0312 00:03:27.245155 1658 log.go:172] (0xc0008640b0) Reply frame received for 1\nI0312 00:03:27.245188 1658 log.go:172] (0xc0008640b0) (0xc0006c8820) Create stream\nI0312 00:03:27.245204 1658 log.go:172] (0xc0008640b0) (0xc0006c8820) Stream added, broadcasting: 3\nI0312 00:03:27.245814 1658 log.go:172] (0xc0008640b0) Reply frame received for 3\nI0312 00:03:27.245831 1658 log.go:172] (0xc0008640b0) (0xc0004534a0) Create stream\nI0312 00:03:27.245838 1658 log.go:172] (0xc0008640b0) (0xc0004534a0) Stream added, broadcasting: 5\nI0312 00:03:27.246506 1658 log.go:172] (0xc0008640b0) Reply frame received for 5\nI0312 00:03:27.319334 1658 log.go:172] (0xc0008640b0) Data frame received for 5\nI0312 00:03:27.319355 1658 log.go:172] (0xc0004534a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 00:03:27.319375 1658 log.go:172] (0xc0008640b0) Data frame received for 3\nI0312 00:03:27.319423 1658 log.go:172] (0xc0006c8820) (3) Data frame handling\nI0312 00:03:27.319440 1658 log.go:172] (0xc0006c8820) (3) Data frame sent\nI0312 00:03:27.319447 1658 log.go:172] (0xc0008640b0) Data frame received for 3\nI0312 00:03:27.319453 1658 log.go:172] (0xc0006c8820) (3) Data frame handling\nI0312 00:03:27.319476 1658 log.go:172] (0xc0004534a0) (5) Data frame sent\nI0312 00:03:27.319492 1658 log.go:172] (0xc0008640b0) Data frame received for 5\nI0312 00:03:27.319500 1658 log.go:172] (0xc0004534a0) (5) Data frame handling\nI0312 00:03:27.320499 1658 log.go:172] (0xc0008640b0) Data frame received for 1\nI0312 00:03:27.320517 1658 log.go:172] (0xc0009b66e0) (1) Data frame handling\nI0312 00:03:27.320529 1658 log.go:172] (0xc0009b66e0) (1) Data frame sent\nI0312 00:03:27.320541 1658 log.go:172] (0xc0008640b0) (0xc0009b66e0) Stream removed, broadcasting: 1\nI0312 00:03:27.320672 1658 log.go:172] (0xc0008640b0) Go away received\nI0312 00:03:27.320828 1658 log.go:172] (0xc0008640b0) (0xc0009b66e0) Stream removed, broadcasting: 1\nI0312 00:03:27.320844 1658 log.go:172] (0xc0008640b0) (0xc0006c8820) Stream removed, broadcasting: 3\nI0312 00:03:27.320852 1658 log.go:172] (0xc0008640b0) (0xc0004534a0) Stream removed, broadcasting: 5\n" Mar 12 00:03:27.323: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 00:03:27.323: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 
00:03:47.339: INFO: Waiting for StatefulSet statefulset-5035/ss2 to complete update Mar 12 00:03:47.339: INFO: Waiting for Pod statefulset-5035/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 00:03:57.345: INFO: Deleting all statefulset in ns statefulset-5035 Mar 12 00:03:57.347: INFO: Scaling statefulset ss2 to 0 Mar 12 00:04:17.390: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 00:04:17.406: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:04:17.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5035" for this suite. • [SLOW TEST:131.309 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":83,"skipped":1371,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:04:17.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 12 00:04:17.503: INFO: Waiting up to 5m0s for pod "downward-api-f8328996-4fc0-47d5-a481-3acaf715c56c" in namespace "downward-api-5174" to be "success or failure" Mar 12 00:04:17.533: INFO: Pod "downward-api-f8328996-4fc0-47d5-a481-3acaf715c56c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.77948ms Mar 12 00:04:19.536: INFO: Pod "downward-api-f8328996-4fc0-47d5-a481-3acaf715c56c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.033557189s STEP: Saw pod success Mar 12 00:04:19.537: INFO: Pod "downward-api-f8328996-4fc0-47d5-a481-3acaf715c56c" satisfied condition "success or failure" Mar 12 00:04:19.539: INFO: Trying to get logs from node latest-worker pod downward-api-f8328996-4fc0-47d5-a481-3acaf715c56c container dapi-container: STEP: delete the pod Mar 12 00:04:19.603: INFO: Waiting for pod downward-api-f8328996-4fc0-47d5-a481-3acaf715c56c to disappear Mar 12 00:04:19.615: INFO: Pod downward-api-f8328996-4fc0-47d5-a481-3acaf715c56c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:04:19.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5174" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":84,"skipped":1393,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:04:19.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 12 00:04:22.837: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:04:22.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5356" for this suite. •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":85,"skipped":1397,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:04:22.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:04:23.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7560" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":86,"skipped":1411,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:04:23.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:04:24.142: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 00:04:26.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719568264, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719568264, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719568264, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719568264, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:04:29.203: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:04:39.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5989" for this suite. STEP: Destroying namespace "webhook-5989-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.344 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":87,"skipped":1425,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:04:39.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 12 00:04:41.533: INFO: &Pod{ObjectMeta:{send-events-9e50d60b-75b3-4d29-97d6-5da3577bde39 events-4969 /api/v1/namespaces/events-4969/pods/send-events-9e50d60b-75b3-4d29-97d6-5da3577bde39 860a023d-ac64-423a-9ed9-9d9cc7c7a6e2 934627 0 2020-03-12 00:04:39 +0000 UTC map[name:foo time:504789828] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-djjjv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-djjjv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-djjjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:04:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:04:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:04:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:04:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.14,StartTime:2020-03-12 00:04:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:04:40 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8f239b1f67e48e8cb6a16fcbd5939546a0a00cce6f2a2df3ad3ae61f9602b06a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 12 00:04:43.537: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 12 00:04:45.541: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:04:45.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4969" for this suite. • [SLOW TEST:6.138 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":280,"completed":88,"skipped":1428,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:04:45.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Mar 12 00:04:45.666: INFO: namespace kubectl-8444 Mar 12 00:04:45.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8444' Mar 12 00:04:46.023: INFO: stderr: "" Mar 12 00:04:46.023: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 00:04:47.027: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 00:04:47.027: INFO: Found 0 / 1 Mar 12 00:04:48.027: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 00:04:48.027: INFO: Found 1 / 1 Mar 12 00:04:48.027: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 12 00:04:48.030: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 00:04:48.030: INFO: ForEach: Found 1 pods from the filter. 
Now looping through them. Mar 12 00:04:48.030: INFO: wait on agnhost-master startup in kubectl-8444 Mar 12 00:04:48.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs agnhost-master-hkbc9 agnhost-master --namespace=kubectl-8444' Mar 12 00:04:48.141: INFO: stderr: "" Mar 12 00:04:48.141: INFO: stdout: "Paused\n" STEP: exposing RC Mar 12 00:04:48.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8444' Mar 12 00:04:48.248: INFO: stderr: "" Mar 12 00:04:48.248: INFO: stdout: "service/rm2 exposed\n" Mar 12 00:04:48.257: INFO: Service rm2 in namespace kubectl-8444 found. STEP: exposing service Mar 12 00:04:50.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8444' Mar 12 00:04:50.405: INFO: stderr: "" Mar 12 00:04:50.405: INFO: stdout: "service/rm3 exposed\n" Mar 12 00:04:50.448: INFO: Service rm3 in namespace kubectl-8444 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:04:52.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8444" for this suite. • [SLOW TEST:6.866 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":89,"skipped":1458,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:04:52.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:04:52.510: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 12 00:04:55.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 create -f -' Mar 12 00:04:57.474: INFO: stderr: "" Mar 12 00:04:57.474: INFO: stdout: "e2e-test-crd-publish-openapi-4375-crd.crd-publish-openapi-test-foo.example.com/test-foo 
created\n" Mar 12 00:04:57.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 delete e2e-test-crd-publish-openapi-4375-crds test-foo' Mar 12 00:04:57.559: INFO: stderr: "" Mar 12 00:04:57.559: INFO: stdout: "e2e-test-crd-publish-openapi-4375-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 12 00:04:57.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 apply -f -' Mar 12 00:04:57.790: INFO: stderr: "" Mar 12 00:04:57.790: INFO: stdout: "e2e-test-crd-publish-openapi-4375-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 12 00:04:57.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 delete e2e-test-crd-publish-openapi-4375-crds test-foo' Mar 12 00:04:57.894: INFO: stderr: "" Mar 12 00:04:57.894: INFO: stdout: "e2e-test-crd-publish-openapi-4375-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 12 00:04:57.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 create -f -' Mar 12 00:04:58.089: INFO: rc: 1 Mar 12 00:04:58.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 apply -f -' Mar 12 00:04:58.291: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 12 00:04:58.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 create -f -' Mar 12 00:04:58.483: INFO: rc: 1 Mar 12 00:04:58.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8600 apply -f -' Mar 12 00:04:58.692: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 12 00:04:58.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4375-crds' Mar 12 00:04:58.919: INFO: stderr: "" Mar 12 00:04:58.919: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4375-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 12 00:04:58.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4375-crds.metadata' Mar 12 00:04:59.153: INFO: stderr: "" Mar 12 00:04:59.153: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4375-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 12 00:04:59.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4375-crds.spec' Mar 12 00:04:59.371: INFO: stderr: "" Mar 12 00:04:59.371: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4375-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 12 00:04:59.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4375-crds.spec.bars' Mar 12 00:04:59.580: INFO: stderr: "" Mar 12 00:04:59.580: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4375-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 12 00:04:59.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4375-crds.spec.bars2' Mar 12 00:04:59.832: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:05:01.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8600" for this suite. 
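For context on the explain output above: kubectl explain only works against this CR because the server publishes the CRD's structural OpenAPI schema. A minimal sketch of such a CRD follows; the foos.example.com group/kind and the field layout are illustrative assumptions modeled on the generated e2e-test-crd-publish-openapi-4375-crd fixture, not the suite's actual manifest.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # hypothetical; must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:          # structural schema; this is what explain renders
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]   # a bar without name is rejected client-side
                  properties:
                    name:
                      type: string
                    age:
                      type: string
                    bazs:
                      type: array
                      items:
                        type: string
EOF
# Once the schema is published (this can lag CRD creation by a few seconds):
kubectl explain foos.spec
kubectl explain foos.spec.bars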
• [SLOW TEST:9.233 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":90,"skipped":1472,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:05:01.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override arguments Mar 12 00:05:01.781: INFO: Waiting up to 5m0s for pod "client-containers-f6510564-f22d-474b-a84e-943f55ac4b1e" in namespace "containers-7351" to be "success or failure" Mar 12 00:05:01.805: INFO: Pod "client-containers-f6510564-f22d-474b-a84e-943f55ac4b1e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.494983ms Mar 12 00:05:03.809: INFO: Pod "client-containers-f6510564-f22d-474b-a84e-943f55ac4b1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028816488s STEP: Saw pod success Mar 12 00:05:03.810: INFO: Pod "client-containers-f6510564-f22d-474b-a84e-943f55ac4b1e" satisfied condition "success or failure" Mar 12 00:05:03.813: INFO: Trying to get logs from node latest-worker pod client-containers-f6510564-f22d-474b-a84e-943f55ac4b1e container test-container: STEP: delete the pod Mar 12 00:05:03.840: INFO: Waiting for pod client-containers-f6510564-f22d-474b-a84e-943f55ac4b1e to disappear Mar 12 00:05:03.842: INFO: Pod client-containers-f6510564-f22d-474b-a84e-943f55ac4b1e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:05:03.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7351" for this suite. 
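The "override the image's default arguments (docker cmd)" pod above boils down to setting args on the container, which replaces the image's CMD while leaving any ENTRYPOINT in place. A minimal sketch under that assumption; the pod name and busybox image are illustrative, not the suite's exact fixture.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # args overrides the image's default CMD; with no `command` given, the
    # image's ENTRYPOINT (none for busybox) still applies.
    args: ["echo", "overridden"]
EOF
kubectl logs override-args-demo   # expected output: overridden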
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":91,"skipped":1498,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:05:03.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:05:03.960: INFO: Waiting up to 5m0s for pod "busybox-user-65534-48297b62-8164-4e52-b2b5-f4d78d9a7161" in namespace "security-context-test-4846" to be "success or failure" Mar 12 00:05:03.963: INFO: Pod "busybox-user-65534-48297b62-8164-4e52-b2b5-f4d78d9a7161": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834421ms Mar 12 00:05:05.967: INFO: Pod "busybox-user-65534-48297b62-8164-4e52-b2b5-f4d78d9a7161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007380215s Mar 12 00:05:05.967: INFO: Pod "busybox-user-65534-48297b62-8164-4e52-b2b5-f4d78d9a7161" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:05:05.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4846" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":92,"skipped":1509,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:05:05.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 00:05:06.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4326' Mar 12 00:05:06.165: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 00:05:06.165: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 12 00:05:06.181: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 12 00:05:06.192: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 12 00:05:06.224: INFO: scanned /root for discovery docs: Mar 12 00:05:06.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4326' Mar 12 00:05:22.116: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 12 00:05:22.116: INFO: stdout: "Created e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864\nScaling up e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 12 00:05:22.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4326' Mar 12 00:05:22.228: INFO: stderr: "" Mar 12 00:05:22.228: INFO: stdout: "e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864-j8nqt e2e-test-httpd-rc-g5pg6 " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Mar 12 00:05:27.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4326' Mar 12 00:05:27.321: INFO: stderr: "" Mar 12 00:05:27.321: INFO: stdout: "e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864-j8nqt " Mar 12 00:05:27.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864-j8nqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4326' Mar 12 00:05:27.397: INFO: stderr: "" Mar 12 00:05:27.397: INFO: stdout: "true" Mar 12 00:05:27.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864-j8nqt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4326' Mar 12 00:05:27.461: INFO: stderr: "" Mar 12 00:05:27.461: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 12 00:05:27.461: INFO: e2e-test-httpd-rc-50850058a145330c2cda3ac6b90fe864-j8nqt is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 Mar 12 00:05:27.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4326' Mar 12 00:05:27.547: INFO: stderr: "" Mar 12 00:05:27.547: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:05:27.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4326" for this suite. 
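As the deprecation warnings in the output note, both --generator=run/v1 and rolling-update were already on their way out when this ran. The flow above, reduced to its two commands (the rc name is a hypothetical stand-in; this only works on 1.17-era kubectl, as rolling-update has since been removed):

kubectl run demo-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
# Creates a copy of the rc, scales it up while scaling the original down,
# then renames the copy back to the original name:
kubectl rolling-update demo-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent
# On current clusters, use a Deployment plus `kubectl rollout status` instead.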
• [SLOW TEST:21.577 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":280,"completed":93,"skipped":1526,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:05:27.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:05:33.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1206" for this suite. 
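The Job above exercises restartPolicy: OnFailure, where the kubelet restarts the failed container in place (same pod) until it succeeds. A minimal sketch of a task that fails exactly once and succeeds on the local restart; the names and the fail-once marker trick are illustrative assumptions, standing in for the suite's actual fixture.

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local         # hypothetical name
spec:
  completions: 3
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure  # restart the container locally instead of failing the pod
      containers:
      - name: worker
        image: docker.io/library/busybox:1.29
        # First run: no marker file yet, so create it and exit nonzero. The
        # restarted container sees the marker on the same emptyDir and succeeds.
        command: ["sh", "-c", "if [ -f /data/ok ]; then exit 0; fi; touch /data/ok; exit 1"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
kubectl wait --for=condition=complete job/fail-once-local --timeout=120s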
• [SLOW TEST:6.082 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":94,"skipped":1544,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:05:33.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:05:33.683: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 00:05:36.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9690 create -f -' Mar 12 00:05:38.776: INFO: stderr: "" Mar 12 00:05:38.776: INFO: stdout: "e2e-test-crd-publish-openapi-2057-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 00:05:38.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9690 delete e2e-test-crd-publish-openapi-2057-crds test-cr' Mar 12 00:05:38.897: INFO: stderr: "" Mar 12 00:05:38.897: INFO: stdout: "e2e-test-crd-publish-openapi-2057-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 12 00:05:38.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9690 apply -f -' Mar 12 00:05:39.197: INFO: stderr: "" Mar 12 00:05:39.197: INFO: stdout: "e2e-test-crd-publish-openapi-2057-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 00:05:39.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9690 delete e2e-test-crd-publish-openapi-2057-crds test-cr' Mar 12 00:05:39.278: INFO: stderr: "" Mar 12 00:05:39.278: INFO: stdout: "e2e-test-crd-publish-openapi-2057-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 12 00:05:39.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2057-crds' Mar 12 00:05:39.484: INFO: stderr: "" Mar 12 00:05:39.484: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2057-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:05:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9690" for this suite. • [SLOW TEST:8.707 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":95,"skipped":1544,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:05:42.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-906c1407-4a86-409a-be53-447f40d3433f STEP: Creating secret with name s-test-opt-upd-813b3320-be23-4a65-9325-a57a9dc76580 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-906c1407-4a86-409a-be53-447f40d3433f STEP: Updating secret s-test-opt-upd-813b3320-be23-4a65-9325-a57a9dc76580 STEP: Creating secret with name s-test-opt-create-cfad27fc-4b9a-4dcc-b5e1-32bc0b99deab STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:05:48.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4786" for this suite. 
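Note: the secrets above are consumed through a projected volume with optional sources, so the pod starts even when a referenced secret is missing and the kubelet adds or removes the projected files as secrets are created, updated, and deleted. A minimal sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional        # illustrative name
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: maybe-present     # illustrative name
          optional: true          # pod starts even if the secret is absent;
                                  # files appear/disappear as the secret changes
EOF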
• [SLOW TEST:6.208 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":96,"skipped":1550,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:05:48.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:14.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6652" for this suite. 
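Note: the three container names above plausibly encode the three restart policies (rpa/rpof/rpn reading as restartPolicy Always/OnFailure/Never); the spec checks that RestartCount, Phase, Ready, and State come out as each policy dictates. The same status fields can be inspected by hand (illustrative pod):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-once            # illustrative name
spec:
  restartPolicy: Never            # container exit leaves the pod Succeeded/Failed
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
kubectl get pod terminate-once -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'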
• [SLOW TEST:25.522 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":97,"skipped":1563,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:14.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-6845 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6845 to expose endpoints map[] Mar 12 00:06:14.194: INFO: Get endpoints failed (7.712901ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 12 00:06:15.198: INFO: successfully validated that service multi-endpoint-test in namespace services-6845 exposes endpoints map[] (1.011545882s elapsed) STEP: Creating pod pod1 in namespace services-6845 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6845 to expose endpoints map[pod1:[100]] Mar 12 00:06:17.254: INFO: successfully validated that service multi-endpoint-test in namespace services-6845 exposes endpoints map[pod1:[100]] (2.049269677s elapsed) STEP: Creating pod pod2 in namespace services-6845 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6845 to expose endpoints map[pod1:[100] pod2:[101]] Mar 12 00:06:19.308: INFO: successfully validated that service multi-endpoint-test in namespace services-6845 exposes endpoints map[pod1:[100] pod2:[101]] (2.049069796s elapsed) STEP: Deleting pod pod1 in namespace services-6845 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6845 to expose endpoints map[pod2:[101]] Mar 12 00:06:19.353: INFO: successfully validated that service multi-endpoint-test in namespace services-6845 exposes endpoints map[pod2:[101]] (23.909103ms elapsed) STEP: Deleting pod pod2 in namespace services-6845 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6845 to expose endpoints map[] Mar 12 00:06:19.379: INFO: successfully validated that service 
multi-endpoint-test in namespace services-6845 exposes endpoints map[] (20.999827ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:19.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6845" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:5.408 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":98,"skipped":1568,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:19.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:06:19.638: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Pending, waiting for it to be Running (with Ready = true) Mar 12 00:06:21.641: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:23.642: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:25.642: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:27.642: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:29.642: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:31.642: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:33.643: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:35.641: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:37.642: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:39.643: INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = false) Mar 12 00:06:41.642: 
INFO: The status of Pod test-webserver-84fa07f9-80e4-475c-9cf8-f5589a38a093 is Running (Ready = true) Mar 12 00:06:41.644: INFO: Container started at 2020-03-12 00:06:20 +0000 UTC, pod became ready at 2020-03-12 00:06:40 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:41.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3772" for this suite. • [SLOW TEST:22.170 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":99,"skipped":1579,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:41.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 12 00:06:41.711: INFO: Waiting up to 5m0s for pod "downward-api-b539140c-871b-4a15-887f-6e873579b2f8" in namespace "downward-api-8401" to be "success or failure" Mar 12 00:06:41.732: INFO: Pod "downward-api-b539140c-871b-4a15-887f-6e873579b2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.265275ms Mar 12 00:06:43.736: INFO: Pod "downward-api-b539140c-871b-4a15-887f-6e873579b2f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025228906s STEP: Saw pod success Mar 12 00:06:43.736: INFO: Pod "downward-api-b539140c-871b-4a15-887f-6e873579b2f8" satisfied condition "success or failure" Mar 12 00:06:43.739: INFO: Trying to get logs from node latest-worker pod downward-api-b539140c-871b-4a15-887f-6e873579b2f8 container dapi-container: STEP: delete the pod Mar 12 00:06:43.780: INFO: Waiting for pod downward-api-b539140c-871b-4a15-887f-6e873579b2f8 to disappear Mar 12 00:06:43.787: INFO: Pod downward-api-b539140c-871b-4a15-887f-6e873579b2f8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:43.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8401" for this suite. 
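Note: when a container declares no resource limits, a downward-API resourceFieldRef falls back to the node's allocatable values, which is what this spec verifies. A minimal sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu    # no limit declared, so node allocatable is reported
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF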
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":100,"skipped":1599,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:43.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-908a5872-99ee-42d6-960f-c6e42fd16b2c STEP: Creating a pod to test consume secrets Mar 12 00:06:43.844: INFO: Waiting up to 5m0s for pod "pod-secrets-3cf33978-66b1-4f9d-aa83-79827c9f635c" in namespace "secrets-3209" to be "success or failure" Mar 12 00:06:43.870: INFO: Pod "pod-secrets-3cf33978-66b1-4f9d-aa83-79827c9f635c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.867787ms Mar 12 00:06:45.872: INFO: Pod "pod-secrets-3cf33978-66b1-4f9d-aa83-79827c9f635c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028408723s STEP: Saw pod success Mar 12 00:06:45.872: INFO: Pod "pod-secrets-3cf33978-66b1-4f9d-aa83-79827c9f635c" satisfied condition "success or failure" Mar 12 00:06:45.874: INFO: Trying to get logs from node latest-worker pod pod-secrets-3cf33978-66b1-4f9d-aa83-79827c9f635c container secret-volume-test: STEP: delete the pod Mar 12 00:06:45.937: INFO: Waiting for pod pod-secrets-3cf33978-66b1-4f9d-aa83-79827c9f635c to disappear Mar 12 00:06:45.944: INFO: Pod pod-secrets-3cf33978-66b1-4f9d-aa83-79827c9f635c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:45.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3209" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":101,"skipped":1605,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:45.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 12 00:06:46.006: INFO: Waiting up to 5m0s for pod "pod-4a0dad67-3802-400c-98e4-75643bf119af" in namespace "emptydir-6016" to be "success or failure" Mar 12 00:06:46.016: INFO: Pod "pod-4a0dad67-3802-400c-98e4-75643bf119af": Phase="Pending", Reason="", readiness=false. Elapsed: 9.801832ms Mar 12 00:06:48.020: INFO: Pod "pod-4a0dad67-3802-400c-98e4-75643bf119af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013670345s STEP: Saw pod success Mar 12 00:06:48.020: INFO: Pod "pod-4a0dad67-3802-400c-98e4-75643bf119af" satisfied condition "success or failure" Mar 12 00:06:48.023: INFO: Trying to get logs from node latest-worker pod pod-4a0dad67-3802-400c-98e4-75643bf119af container test-container: STEP: delete the pod Mar 12 00:06:48.048: INFO: Waiting for pod pod-4a0dad67-3802-400c-98e4-75643bf119af to disappear Mar 12 00:06:48.052: INFO: Pod pod-4a0dad67-3802-400c-98e4-75643bf119af no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:48.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6016" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":102,"skipped":1616,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:48.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 12 00:06:49.242: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee" in namespace "downward-api-8277" to be "success or failure" Mar 12 00:06:49.282: INFO: Pod "downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 39.877448ms Mar 12 00:06:51.285: INFO: Pod "downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee": Phase="Running", Reason="", readiness=true. Elapsed: 2.042937507s Mar 12 00:06:53.287: INFO: Pod "downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045003489s STEP: Saw pod success Mar 12 00:06:53.287: INFO: Pod "downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee" satisfied condition "success or failure" Mar 12 00:06:53.289: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee container client-container: STEP: delete the pod Mar 12 00:06:53.316: INFO: Waiting for pod downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee to disappear Mar 12 00:06:53.346: INFO: Pod downwardapi-volume-4ede65e4-8af1-4cfa-b8a5-0160ecfbe7ee no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:53.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8277" for this suite. 
• [SLOW TEST:5.292 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":103,"skipped":1619,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:53.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:06:53.936: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:06:56.997: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:57.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9216" for this suite. STEP: Destroying namespace "webhook-9216-markers" for this suite. 
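Note: "Registering the mutating pod webhook via the AdmissionRegistration API" means creating a MutatingWebhookConfiguration that points at the webhook service deployed above; pod CREATE requests are then rewritten by the webhook before admission. A sketch of the registration object (service, namespace, and names are illustrative; the CA bundle must match the webhook's serving certificate):

kubectl create -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-pods               # illustrative name
webhooks:
- name: mutate-pods.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-ns       # illustrative
      name: e2e-test-webhook
      path: /mutating-pods
    # caBundle: <base64 CA for the webhook's serving cert goes here>
EOF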
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":104,"skipped":1631,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:57.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-54b3d7b0-f83e-4b29-8fc3-c8345ff2a74f STEP: Creating a pod to test consume secrets Mar 12 00:06:57.352: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cacbfa2b-7078-400c-af52-b36262f0e735" in namespace "projected-2470" to be "success or failure" Mar 12 00:06:57.357: INFO: Pod "pod-projected-secrets-cacbfa2b-7078-400c-af52-b36262f0e735": Phase="Pending", Reason="", readiness=false. Elapsed: 4.817975ms Mar 12 00:06:59.372: INFO: Pod "pod-projected-secrets-cacbfa2b-7078-400c-af52-b36262f0e735": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019986517s STEP: Saw pod success Mar 12 00:06:59.372: INFO: Pod "pod-projected-secrets-cacbfa2b-7078-400c-af52-b36262f0e735" satisfied condition "success or failure" Mar 12 00:06:59.374: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-cacbfa2b-7078-400c-af52-b36262f0e735 container projected-secret-volume-test: STEP: delete the pod Mar 12 00:06:59.401: INFO: Waiting for pod pod-projected-secrets-cacbfa2b-7078-400c-af52-b36262f0e735 to disappear Mar 12 00:06:59.417: INFO: Pod pod-projected-secrets-cacbfa2b-7078-400c-af52-b36262f0e735 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:06:59.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2470" for this suite. 
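Note: the projected volume type exercised here can merge several sources (secret, configMap, downwardAPI) under one mount point, which is what distinguishes it from a plain secret volume. A general sketch (names illustrative, broader than this single spec):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-in-one      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls -R /projected"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:                    # several sources merged under one mount
      - secret:
          name: mysecret          # illustrative
      - configMap:
          name: myconfigmap       # illustrative
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF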
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":105,"skipped":1656,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:06:59.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-p7lf STEP: Creating a pod to test atomic-volume-subpath Mar 12 00:06:59.515: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-p7lf" in namespace "subpath-5148" to be "success or failure" Mar 12 00:06:59.554: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.864006ms Mar 12 00:07:01.588: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 2.073390132s Mar 12 00:07:03.595: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 4.079634618s Mar 12 00:07:05.598: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 6.082974377s Mar 12 00:07:07.601: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 8.085491155s Mar 12 00:07:09.603: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 10.088126904s Mar 12 00:07:11.607: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 12.09147211s Mar 12 00:07:13.610: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 14.094853391s Mar 12 00:07:15.614: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 16.098780459s Mar 12 00:07:17.618: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 18.102643984s Mar 12 00:07:19.621: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Running", Reason="", readiness=true. Elapsed: 20.106317744s Mar 12 00:07:21.643: INFO: Pod "pod-subpath-test-downwardapi-p7lf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.127575579s STEP: Saw pod success Mar 12 00:07:21.643: INFO: Pod "pod-subpath-test-downwardapi-p7lf" satisfied condition "success or failure" Mar 12 00:07:21.645: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-p7lf container test-container-subpath-downwardapi-p7lf: STEP: delete the pod Mar 12 00:07:21.681: INFO: Waiting for pod pod-subpath-test-downwardapi-p7lf to disappear Mar 12 00:07:21.695: INFO: Pod pod-subpath-test-downwardapi-p7lf no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-p7lf Mar 12 00:07:21.695: INFO: Deleting pod "pod-subpath-test-downwardapi-p7lf" in namespace "subpath-5148" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:07:21.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5148" for this suite. • [SLOW TEST:22.278 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":106,"skipped":1674,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:07:21.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7839 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Mar 12 00:07:21.912: INFO: Found 0 stateful pods, waiting for 3 Mar 12 00:07:31.916: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:07:31.916: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:07:31.916: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 12 00:07:31.941: 
INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 12 00:07:41.980: INFO: Updating stateful set ss2 Mar 12 00:07:42.011: INFO: Waiting for Pod statefulset-7839/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 12 00:07:52.018: INFO: Waiting for Pod statefulset-7839/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 12 00:08:02.198: INFO: Found 2 stateful pods, waiting for 3 Mar 12 00:08:12.202: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:08:12.202: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:08:12.202: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 12 00:08:12.224: INFO: Updating stateful set ss2 Mar 12 00:08:12.434: INFO: Waiting for Pod statefulset-7839/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 12 00:08:22.458: INFO: Updating stateful set ss2 Mar 12 00:08:22.470: INFO: Waiting for StatefulSet statefulset-7839/ss2 to complete update Mar 12 00:08:22.470: INFO: Waiting for Pod statefulset-7839/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 00:08:32.478: INFO: Deleting all statefulset in ns statefulset-7839 Mar 12 00:08:32.481: INFO: Scaling statefulset ss2 to 0 Mar 12 00:08:52.527: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 00:08:52.529: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:08:52.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7839" for this suite. 
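Note: the canary and phased phases above are driven by the RollingUpdate partition: only pods with ordinal >= partition roll to the new revision, so lowering the partition step by step releases the update pod by pod. Against a set like ss2, the moves look like this (patch values illustrative):

# canary: with partition=2, only ss2-2 rolls to the new revision
kubectl patch statefulset ss2 --type merge -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# phased: lowering the partition to 0 lets the remaining ordinals roll
kubectl patch statefulset ss2 --type merge -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'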
• [SLOW TEST:90.841 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":107,"skipped":1680,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:08:52.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-38551008-415c-4425-bf72-88bec2645b9d STEP: Creating a pod to test consume configMaps Mar 12 00:08:52.627: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e10a195d-7bbb-45c0-b8ad-1fa9a39ace6e" in namespace "projected-2820" to be "success or failure" Mar 12 00:08:52.668: INFO: Pod "pod-projected-configmaps-e10a195d-7bbb-45c0-b8ad-1fa9a39ace6e": Phase="Pending", Reason="", readiness=false. Elapsed: 40.717933ms Mar 12 00:08:54.672: INFO: Pod "pod-projected-configmaps-e10a195d-7bbb-45c0-b8ad-1fa9a39ace6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044786553s STEP: Saw pod success Mar 12 00:08:54.672: INFO: Pod "pod-projected-configmaps-e10a195d-7bbb-45c0-b8ad-1fa9a39ace6e" satisfied condition "success or failure" Mar 12 00:08:54.675: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e10a195d-7bbb-45c0-b8ad-1fa9a39ace6e container projected-configmap-volume-test: STEP: delete the pod Mar 12 00:08:54.760: INFO: Waiting for pod pod-projected-configmaps-e10a195d-7bbb-45c0-b8ad-1fa9a39ace6e to disappear Mar 12 00:08:54.769: INFO: Pod pod-projected-configmaps-e10a195d-7bbb-45c0-b8ad-1fa9a39ace6e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:08:54.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2820" for this suite. 
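Note: "multiple volumes in the same pod" means the same configMap backs two separate projected volumes, each with its own mount path. A minimal sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-twice                  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/key /etc/cm-b/key"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:                        # two volumes backed by the same configMap
  - name: cm-a
    projected:
      sources:
      - configMap:
          name: shared-cm         # illustrative
  - name: cm-b
    projected:
      sources:
      - configMap:
          name: shared-cm
EOF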
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1693,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:08:54.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating api versions Mar 12 00:08:54.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config api-versions' Mar 12 00:08:55.015: INFO: stderr: "" Mar 12 00:08:55.015: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:08:55.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6509" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":280,"completed":109,"skipped":1695,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:08:55.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-2942e689-3119-4ab1-a43f-3fe531bee242 in namespace container-probe-3725 Mar 12 00:08:59.127: INFO: Started pod liveness-2942e689-3119-4ab1-a43f-3fe531bee242 in namespace container-probe-3725 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 00:08:59.130: INFO: Initial restart count of pod liveness-2942e689-3119-4ab1-a43f-3fe531bee242 is 0 Mar 12 00:09:23.182: INFO: Restart count of pod container-probe-3725/liveness-2942e689-3119-4ab1-a43f-3fe531bee242 is now 1 (24.051579402s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:09:23.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3725" for this suite. 
• [SLOW TEST:28.201 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":110,"skipped":1699,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:09:23.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:09:23.323: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7fee09d5-e759-4e17-9044-9d48df94549f" in namespace "security-context-test-3289" to be "success or failure" Mar 12 00:09:23.333: INFO: Pod "busybox-readonly-false-7fee09d5-e759-4e17-9044-9d48df94549f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.79688ms Mar 12 00:09:25.336: INFO: Pod "busybox-readonly-false-7fee09d5-e759-4e17-9044-9d48df94549f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012878815s Mar 12 00:09:25.336: INFO: Pod "busybox-readonly-false-7fee09d5-e759-4e17-9044-9d48df94549f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:09:25.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3289" for this suite. 
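Note: with readOnlyRootFilesystem: false the container's root filesystem stays writable, so a write to it is the success condition. A minimal sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    securityContext:
      readOnlyRootFilesystem: false   # rootfs stays writable
    # succeeds only if the root filesystem is writable
    command: ["sh", "-c", "touch /should-succeed"]
EOF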
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":111,"skipped":1700,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:09:25.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 12 00:09:25.397: INFO: Waiting up to 5m0s for pod "pod-5f0371da-8890-4263-9451-d5552799d8a3" in namespace "emptydir-8377" to be "success or failure" Mar 12 00:09:25.401: INFO: Pod "pod-5f0371da-8890-4263-9451-d5552799d8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.760337ms Mar 12 00:09:27.405: INFO: Pod "pod-5f0371da-8890-4263-9451-d5552799d8a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007514256s STEP: Saw pod success Mar 12 00:09:27.405: INFO: Pod "pod-5f0371da-8890-4263-9451-d5552799d8a3" satisfied condition "success or failure" Mar 12 00:09:27.407: INFO: Trying to get logs from node latest-worker pod pod-5f0371da-8890-4263-9451-d5552799d8a3 container test-container: STEP: delete the pod Mar 12 00:09:27.425: INFO: Waiting for pod pod-5f0371da-8890-4263-9451-d5552799d8a3 to disappear Mar 12 00:09:27.429: INFO: Pod pod-5f0371da-8890-4263-9451-d5552799d8a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:09:27.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8377" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1704,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:09:27.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:09:27.547: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 12 00:09:27.577: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:27.582: INFO: Number of nodes with available pods: 0 Mar 12 00:09:27.582: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:09:28.586: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:28.594: INFO: Number of nodes with available pods: 0 Mar 12 00:09:28.594: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:09:29.587: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:29.589: INFO: Number of nodes with available pods: 2 Mar 12 00:09:29.589: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 12 00:09:29.620: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:29.620: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:29.626: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:30.630: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:30.630: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 12 00:09:30.633: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:31.630: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:31.630: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:31.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:32.632: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:32.632: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:32.632: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:32.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:33.629: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:33.629: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:33.629: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:33.631: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:34.629: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:34.629: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:34.629: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:34.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:35.631: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:35.631: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:35.631: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:35.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:36.630: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:36.630: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 12 00:09:36.630: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:36.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:37.631: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:37.631: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:37.631: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:37.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:38.630: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:38.630: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:38.630: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:38.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:39.630: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:39.630: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:39.630: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:39.651: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:40.631: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:40.631: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:40.631: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:40.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:41.630: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:41.630: INFO: Wrong image for pod: daemon-set-zzcmv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:41.630: INFO: Pod daemon-set-zzcmv is not available Mar 12 00:09:41.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:42.631: INFO: Pod daemon-set-lfbnh is not available Mar 12 00:09:42.631: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 12 00:09:42.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:43.630: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:43.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:44.652: INFO: Wrong image for pod: daemon-set-s6tr6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 00:09:44.652: INFO: Pod daemon-set-s6tr6 is not available Mar 12 00:09:44.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:45.630: INFO: Pod daemon-set-nd4hm is not available Mar 12 00:09:45.633: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 12 00:09:45.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:45.639: INFO: Number of nodes with available pods: 1 Mar 12 00:09:45.639: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:09:46.644: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:46.647: INFO: Number of nodes with available pods: 1 Mar 12 00:09:46.647: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:09:47.644: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:09:47.647: INFO: Number of nodes with available pods: 2 Mar 12 00:09:47.647: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9384, will wait for the garbage collector to delete the pods Mar 12 00:09:47.719: INFO: Deleting DaemonSet.extensions daemon-set took: 5.684433ms Mar 12 00:09:48.019: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239742ms Mar 12 00:10:02.522: INFO: Number of nodes with available pods: 0 Mar 12 00:10:02.522: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 00:10:02.525: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9384/daemonsets","resourceVersion":"936790"},"items":null} Mar 12 00:10:02.527: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9384/pods","resourceVersion":"936790"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:10:02.555: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9384" for this suite. • [SLOW TEST:35.124 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":113,"skipped":1721,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:10:02.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's command Mar 12 00:10:02.611: INFO: Waiting up to 5m0s for pod "var-expansion-a1bfde6c-e594-4040-921b-000f823e35cc" in namespace "var-expansion-1872" to be "success or failure" Mar 12 00:10:02.614: INFO: Pod "var-expansion-a1bfde6c-e594-4040-921b-000f823e35cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222875ms Mar 12 00:10:04.618: INFO: Pod "var-expansion-a1bfde6c-e594-4040-921b-000f823e35cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006366388s STEP: Saw pod success Mar 12 00:10:04.618: INFO: Pod "var-expansion-a1bfde6c-e594-4040-921b-000f823e35cc" satisfied condition "success or failure" Mar 12 00:10:04.621: INFO: Trying to get logs from node latest-worker pod var-expansion-a1bfde6c-e594-4040-921b-000f823e35cc container dapi-container: STEP: delete the pod Mar 12 00:10:04.652: INFO: Waiting for pod var-expansion-a1bfde6c-e594-4040-921b-000f823e35cc to disappear Mar 12 00:10:04.662: INFO: Pod var-expansion-a1bfde6c-e594-4040-921b-000f823e35cc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:10:04.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1872" for this suite. 
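The var-expansion pod above exits successfully because the kubelet expands $(VAR) references in a container's command against that container's environment before starting it; the test then pulls the container log and checks for the substituted value. A minimal sketch of such a spec in Go, using the corev1 types (pod name, image, and message are illustrative, not the exact spec the suite submits):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// $(MESSAGE) is resolved by the kubelet from the container's env.
	// The same mechanism applies to Args, which a later test exercises.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo $(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}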
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":114,"skipped":1726,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:10:04.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-f19f342d-2b93-441d-960d-dfddca5a0898 in namespace container-probe-4671 Mar 12 00:10:06.744: INFO: Started pod busybox-f19f342d-2b93-441d-960d-dfddca5a0898 in namespace container-probe-4671 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 00:10:06.747: INFO: Initial restart count of pod busybox-f19f342d-2b93-441d-960d-dfddca5a0898 is 0 Mar 12 00:10:54.849: INFO: Restart count of pod container-probe-4671/busybox-f19f342d-2b93-441d-960d-dfddca5a0898 is now 1 (48.102194439s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:10:54.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4671" for this suite. 
• [SLOW TEST:50.211 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":115,"skipped":1745,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:10:54.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating cluster-info Mar 12 00:10:54.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config cluster-info' Mar 12 00:10:55.039: INFO: stderr: "" Mar 12 00:10:55.039: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:10:55.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7951" for this suite. 
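The cluster-info check shells out to kubectl with an explicit --server and --kubeconfig and asserts that the master and KubeDNS endpoints appear on stdout; the \x1b[0;32m sequences in the captured output are kubectl's ANSI color codes, not corruption. An equivalent invocation from Go, mirroring the framework's kubectl runner (the server address and paths are the ones logged above and will differ per cluster):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run kubectl cluster-info against an explicit apiserver endpoint.
	out, err := exec.Command("kubectl",
		"--server=https://172.30.12.66:32776",
		"--kubeconfig=/root/.kube/config",
		"cluster-info").CombinedOutput()
	if err != nil {
		fmt.Println("cluster-info failed:", err)
	}
	fmt.Print(string(out))
}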
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":280,"completed":116,"skipped":1794,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:10:55.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Mar 12 00:10:55.115: INFO: Waiting up to 5m0s for pod "var-expansion-af66769c-96e6-4573-812e-03486371e297" in namespace "var-expansion-9535" to be "success or failure" Mar 12 00:10:55.131: INFO: Pod "var-expansion-af66769c-96e6-4573-812e-03486371e297": Phase="Pending", Reason="", readiness=false. Elapsed: 15.452974ms Mar 12 00:10:57.134: INFO: Pod "var-expansion-af66769c-96e6-4573-812e-03486371e297": Phase="Running", Reason="", readiness=true. Elapsed: 2.01881803s Mar 12 00:10:59.139: INFO: Pod "var-expansion-af66769c-96e6-4573-812e-03486371e297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023170498s STEP: Saw pod success Mar 12 00:10:59.139: INFO: Pod "var-expansion-af66769c-96e6-4573-812e-03486371e297" satisfied condition "success or failure" Mar 12 00:10:59.142: INFO: Trying to get logs from node latest-worker pod var-expansion-af66769c-96e6-4573-812e-03486371e297 container dapi-container: STEP: delete the pod Mar 12 00:10:59.183: INFO: Waiting for pod var-expansion-af66769c-96e6-4573-812e-03486371e297 to disappear Mar 12 00:10:59.192: INFO: Pod var-expansion-af66769c-96e6-4573-812e-03486371e297 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:10:59.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9535" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":117,"skipped":1796,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:10:59.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-secret-sqnf STEP: Creating a pod to test atomic-volume-subpath Mar 12 00:10:59.364: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sqnf" in namespace "subpath-2538" to be "success or failure" Mar 12 00:10:59.366: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238314ms Mar 12 00:11:01.370: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005923804s Mar 12 00:11:03.374: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 4.010412448s Mar 12 00:11:05.378: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 6.013933092s Mar 12 00:11:07.381: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 8.017698288s Mar 12 00:11:09.385: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 10.021317952s Mar 12 00:11:11.389: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 12.024857363s Mar 12 00:11:13.392: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 14.028168169s Mar 12 00:11:15.395: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 16.031660802s Mar 12 00:11:17.399: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 18.035256239s Mar 12 00:11:19.403: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 20.03890294s Mar 12 00:11:21.407: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Running", Reason="", readiness=true. Elapsed: 22.042854486s Mar 12 00:11:23.410: INFO: Pod "pod-subpath-test-secret-sqnf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.046334929s STEP: Saw pod success Mar 12 00:11:23.410: INFO: Pod "pod-subpath-test-secret-sqnf" satisfied condition "success or failure" Mar 12 00:11:23.413: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-sqnf container test-container-subpath-secret-sqnf: STEP: delete the pod Mar 12 00:11:23.431: INFO: Waiting for pod pod-subpath-test-secret-sqnf to disappear Mar 12 00:11:23.447: INFO: Pod pod-subpath-test-secret-sqnf no longer exists STEP: Deleting pod pod-subpath-test-secret-sqnf Mar 12 00:11:23.448: INFO: Deleting pod "pod-subpath-test-secret-sqnf" in namespace "subpath-2538" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:11:23.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2538" for this suite. • [SLOW TEST:24.256 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":118,"skipped":1797,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:11:23.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:11:23.526: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1d13f16a-0106-41ef-9ee8-fe5bf8615c74" in namespace "security-context-test-9662" to be "success or failure" Mar 12 00:11:23.557: INFO: Pod "alpine-nnp-false-1d13f16a-0106-41ef-9ee8-fe5bf8615c74": Phase="Pending", Reason="", readiness=false. Elapsed: 31.188945ms Mar 12 00:11:25.561: INFO: Pod "alpine-nnp-false-1d13f16a-0106-41ef-9ee8-fe5bf8615c74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.034913874s Mar 12 00:11:25.561: INFO: Pod "alpine-nnp-false-1d13f16a-0106-41ef-9ee8-fe5bf8615c74" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:11:25.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9662" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":1807,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:11:25.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b in namespace container-probe-7914 Mar 12 00:11:27.703: INFO: Started pod liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b in namespace container-probe-7914 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 00:11:27.706: INFO: Initial restart count of pod liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b is 0 Mar 12 00:11:47.746: INFO: Restart count of pod container-probe-7914/liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b is now 1 (20.03976784s elapsed) Mar 12 00:12:07.783: INFO: Restart count of pod container-probe-7914/liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b is now 2 (40.076976706s elapsed) Mar 12 00:12:27.819: INFO: Restart count of pod container-probe-7914/liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b is now 3 (1m0.113331986s elapsed) Mar 12 00:12:47.868: INFO: Restart count of pod container-probe-7914/liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b is now 4 (1m20.161849225s elapsed) Mar 12 00:13:51.992: INFO: Restart count of pod container-probe-7914/liveness-debd3ce2-886f-4951-af3f-47b4b13ed79b is now 5 (2m24.286406145s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:13:52.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7914" for this suite. 
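The assertion behind this test is that restartCount in a pod's container status is only ever incremented by the kubelet, so successive reads must be monotonically non-decreasing. A sketch of observing it with client-go (namespace and pod name are illustrative; the context-taking method signatures shown arrived around client-go v0.18):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Sample the restart count a few times; each value should be >= the last.
	for i := 0; i < 10; i++ {
		pod, err := client.CoreV1().Pods("container-probe-demo").
			Get(context.TODO(), "liveness-demo", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(pod.Status.ContainerStatuses) > 0 {
			fmt.Println("restartCount:", pod.Status.ContainerStatuses[0].RestartCount)
		}
		time.Sleep(10 * time.Second)
	}
}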
• [SLOW TEST:146.458 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":120,"skipped":1831,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:13:52.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-projected-jv4k STEP: Creating a pod to test atomic-volume-subpath Mar 12 00:13:52.184: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jv4k" in namespace "subpath-285" to be "success or failure" Mar 12 00:13:52.190: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142774ms Mar 12 00:13:54.194: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 2.009762378s Mar 12 00:13:56.198: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 4.01387635s Mar 12 00:13:58.202: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 6.017860294s Mar 12 00:14:00.206: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 8.021941508s Mar 12 00:14:02.210: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 10.02609724s Mar 12 00:14:04.214: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 12.03007701s Mar 12 00:14:06.218: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 14.034242864s Mar 12 00:14:08.222: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 16.038224741s Mar 12 00:14:10.226: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 18.041782356s Mar 12 00:14:12.230: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Running", Reason="", readiness=true. Elapsed: 20.045803769s Mar 12 00:14:14.233: INFO: Pod "pod-subpath-test-projected-jv4k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.049410742s STEP: Saw pod success Mar 12 00:14:14.233: INFO: Pod "pod-subpath-test-projected-jv4k" satisfied condition "success or failure" Mar 12 00:14:14.236: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-jv4k container test-container-subpath-projected-jv4k: STEP: delete the pod Mar 12 00:14:14.272: INFO: Waiting for pod pod-subpath-test-projected-jv4k to disappear Mar 12 00:14:14.295: INFO: Pod pod-subpath-test-projected-jv4k no longer exists STEP: Deleting pod pod-subpath-test-projected-jv4k Mar 12 00:14:14.295: INFO: Deleting pod "pod-subpath-test-projected-jv4k" in namespace "subpath-285" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:14:14.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-285" for this suite. • [SLOW TEST:22.273 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":121,"skipped":1850,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:14:14.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-fd17662f-85e8-491e-abff-b21c1cda919f in namespace container-probe-8135 Mar 12 00:14:16.388: INFO: Started pod test-webserver-fd17662f-85e8-491e-abff-b21c1cda919f in namespace container-probe-8135 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 00:14:16.389: INFO: Initial restart count of pod test-webserver-fd17662f-85e8-491e-abff-b21c1cda919f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:18:16.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8135" for this suite. 
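This probe test asserts the inverse of the earlier ones: a webserver that keeps answering its HTTP liveness probe is left alone, which is why the restart count stays at 0 across the four-minute observation window. A sketch of an HTTP-GET probe in Go (image, path, port, and timings are illustrative; the suite uses its own test-webserver image):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx", // any always-serving HTTP image works for the sketch
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/", // probe a path the server actually serves
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}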
• [SLOW TEST:242.686 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":122,"skipped":1852,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:18:16.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:18:48.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4127" for this suite. STEP: Destroying namespace "nsdeletetest-9223" for this suite. Mar 12 00:18:48.323: INFO: Namespace nsdeletetest-9223 was already deleted STEP: Destroying namespace "nsdeletetest-2508" for this suite. 
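Namespace deletion is asynchronous: the apiserver marks the namespace Terminating, the namespace controller deletes every object inside it (including the running pod), and only then does the namespace object itself disappear, which is why the test waits for removal before recreating the name. A delete-and-wait sketch with client-go (namespace name illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	const ns = "nsdeletetest-demo"
	if err := client.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Poll until the namespace is fully gone, not merely Terminating.
	for {
		_, err := client.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("namespace removed")
			return
		}
		time.Sleep(2 * time.Second)
	}
}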
• [SLOW TEST:31.332 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":123,"skipped":1862,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:18:48.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Mar 12 00:18:48.370: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:03.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4822" for this suite. 
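Renaming a served version, as exercised above, amounts to updating the CRD's spec.versions entry; the apiserver then republishes its OpenAPI document under the new name and stops serving the old one, while other versions are left untouched. A rough sketch with the apiextensions clientset (the CRD name and version names are illustrative, and renaming a storage version in a real cluster needs more care than shown):

package main

import (
	"context"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := clientset.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	crds := client.ApiextensionsV1().CustomResourceDefinitions()
	crd, err := crds.Get(ctx, "foos.example.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Rename a served, non-storage version in place.
	for i := range crd.Spec.Versions {
		if crd.Spec.Versions[i].Name == "v4" {
			crd.Spec.Versions[i].Name = "v5"
		}
	}
	if _, err := crds.Update(ctx, crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}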
• [SLOW TEST:14.685 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":124,"skipped":1877,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:03.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Mar 12 00:19:03.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3790' Mar 12 00:19:05.067: INFO: stderr: "" Mar 12 00:19:05.067: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 00:19:06.071: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 00:19:06.071: INFO: Found 0 / 1 Mar 12 00:19:07.071: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 00:19:07.071: INFO: Found 1 / 1 Mar 12 00:19:07.071: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 12 00:19:07.079: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 00:19:07.079: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 12 00:19:07.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config patch pod agnhost-master-4pkg8 --namespace=kubectl-3790 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 12 00:19:07.194: INFO: stderr: "" Mar 12 00:19:07.194: INFO: stdout: "pod/agnhost-master-4pkg8 patched\n" STEP: checking annotations Mar 12 00:19:07.214: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 00:19:07.215: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:07.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3790" for this suite. 
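The patch sent above is a strategic-merge patch; annotation maps merge by key, so {"metadata":{"annotations":{"x":"y"}}} adds one annotation without touching the rest of the object. The same request through client-go (namespace and pod name are illustrative; the real name is generated by the replication controller):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Strategic merge: adds the annotation, leaves everything else alone.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := client.CoreV1().Pods("kubectl-demo").Patch(context.TODO(),
		"agnhost-master-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Annotations["x"]) // "y"
}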
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":280,"completed":125,"skipped":1887,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:07.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384 STEP: creating the pod Mar 12 00:19:07.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9608' Mar 12 00:19:07.601: INFO: stderr: "" Mar 12 00:19:07.601: INFO: stdout: "pod/pause created\n" Mar 12 00:19:07.601: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 12 00:19:07.601: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9608" to be "running and ready" Mar 12 00:19:07.641: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 39.680022ms Mar 12 00:19:09.644: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.04236232s Mar 12 00:19:09.644: INFO: Pod "pause" satisfied condition "running and ready" Mar 12 00:19:09.644: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: adding the label testing-label with value testing-label-value to a pod Mar 12 00:19:09.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9608' Mar 12 00:19:09.737: INFO: stderr: "" Mar 12 00:19:09.737: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 12 00:19:09.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9608' Mar 12 00:19:09.800: INFO: stderr: "" Mar 12 00:19:09.800: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 12 00:19:09.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9608' Mar 12 00:19:09.865: INFO: stderr: "" Mar 12 00:19:09.865: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 12 00:19:09.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9608' Mar 12 00:19:09.927: INFO: stderr: "" Mar 12 00:19:09.927: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 STEP: using delete to clean up resources Mar 12 00:19:09.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9608' Mar 12 00:19:09.996: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:19:09.996: INFO: stdout: "pod \"pause\" force deleted\n" Mar 12 00:19:09.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9608' Mar 12 00:19:10.060: INFO: stderr: "No resources found in kubectl-9608 namespace.\n" Mar 12 00:19:10.060: INFO: stdout: "" Mar 12 00:19:10.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9608 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 00:19:10.117: INFO: stderr: "" Mar 12 00:19:10.118: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:10.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9608" for this suite. 
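The trailing dash in `kubectl label pods pause testing-label-` is kubectl's removal syntax; on the wire it becomes a merge patch with a null value, which strategic merge interprets as deleting the key. Sketched with the same Patch call as the annotation example (names illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func patchPod(client *kubernetes.Clientset, ns, name, patch string) error {
	_, err := client.CoreV1().Pods(ns).Patch(context.TODO(), name,
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Add the label, then delete it: a null value removes the key.
	if err := patchPod(client, "kubectl-demo", "pause",
		`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`); err != nil {
		panic(err)
	}
	if err := patchPod(client, "kubectl-demo", "pause",
		`{"metadata":{"labels":{"testing-label":null}}}`); err != nil {
		panic(err)
	}
}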
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":280,"completed":126,"skipped":1887,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:10.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 12 00:19:10.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-703e0ede-a0db-4e53-b859-02d9b700e887" in namespace "downward-api-5174" to be "success or failure" Mar 12 00:19:10.216: INFO: Pod "downwardapi-volume-703e0ede-a0db-4e53-b859-02d9b700e887": Phase="Pending", Reason="", readiness=false. Elapsed: 6.685244ms Mar 12 00:19:12.219: INFO: Pod "downwardapi-volume-703e0ede-a0db-4e53-b859-02d9b700e887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009649317s STEP: Saw pod success Mar 12 00:19:12.219: INFO: Pod "downwardapi-volume-703e0ede-a0db-4e53-b859-02d9b700e887" satisfied condition "success or failure" Mar 12 00:19:12.221: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-703e0ede-a0db-4e53-b859-02d9b700e887 container client-container: STEP: delete the pod Mar 12 00:19:12.294: INFO: Waiting for pod downwardapi-volume-703e0ede-a0db-4e53-b859-02d9b700e887 to disappear Mar 12 00:19:12.312: INFO: Pod downwardapi-volume-703e0ede-a0db-4e53-b859-02d9b700e887 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:12.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5174" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":127,"skipped":1901,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:12.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 12 00:19:12.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c33cd943-bb57-40ba-bd0c-caeb7b8676a7" in namespace "projected-5508" to be "success or failure" Mar 12 00:19:12.545: INFO: Pod "downwardapi-volume-c33cd943-bb57-40ba-bd0c-caeb7b8676a7": Phase="Pending", Reason="", readiness=false. Elapsed: 69.087763ms Mar 12 00:19:14.549: INFO: Pod "downwardapi-volume-c33cd943-bb57-40ba-bd0c-caeb7b8676a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072833538s STEP: Saw pod success Mar 12 00:19:14.549: INFO: Pod "downwardapi-volume-c33cd943-bb57-40ba-bd0c-caeb7b8676a7" satisfied condition "success or failure" Mar 12 00:19:14.551: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c33cd943-bb57-40ba-bd0c-caeb7b8676a7 container client-container: STEP: delete the pod Mar 12 00:19:14.582: INFO: Waiting for pod downwardapi-volume-c33cd943-bb57-40ba-bd0c-caeb7b8676a7 to disappear Mar 12 00:19:14.587: INFO: Pod downwardapi-volume-c33cd943-bb57-40ba-bd0c-caeb7b8676a7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:14.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5508" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":128,"skipped":1930,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:14.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:19:14.642: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:15.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2152" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":280,"completed":129,"skipped":1931,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:15.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 00:19:17.539: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:17.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2836" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":1946,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:17.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-5834 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 00:19:17.759: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 12 00:19:17.851: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 12 00:19:19.853: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 12 00:19:21.854: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:23.854: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:25.863: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:27.855: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:29.857: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:31.855: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:33.855: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:35.854: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:19:37.860: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 12 00:19:37.866: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 12 00:19:39.904: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=http&host=10.244.1.62&port=8080&tries=1'] Namespace:pod-network-test-5834 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:19:39.904: INFO: >>> kubeConfig: /root/.kube/config I0312 00:19:39.937517 7 log.go:172] (0xc004d18840) (0xc002734a00) Create stream I0312 00:19:39.937561 7 log.go:172] (0xc004d18840) (0xc002734a00) Stream added, broadcasting: 1 I0312 00:19:39.946522 7 log.go:172] (0xc004d18840) Reply frame received for 1 I0312 00:19:39.946605 7 log.go:172] 
(0xc004d18840) (0xc00233e500) Create stream I0312 00:19:39.946645 7 log.go:172] (0xc004d18840) (0xc00233e500) Stream added, broadcasting: 3 I0312 00:19:39.947858 7 log.go:172] (0xc004d18840) Reply frame received for 3 I0312 00:19:39.947909 7 log.go:172] (0xc004d18840) (0xc002d01ea0) Create stream I0312 00:19:39.947932 7 log.go:172] (0xc004d18840) (0xc002d01ea0) Stream added, broadcasting: 5 I0312 00:19:39.949211 7 log.go:172] (0xc004d18840) Reply frame received for 5 I0312 00:19:40.007041 7 log.go:172] (0xc004d18840) Data frame received for 3 I0312 00:19:40.007076 7 log.go:172] (0xc00233e500) (3) Data frame handling I0312 00:19:40.007099 7 log.go:172] (0xc00233e500) (3) Data frame sent I0312 00:19:40.007330 7 log.go:172] (0xc004d18840) Data frame received for 3 I0312 00:19:40.007388 7 log.go:172] (0xc00233e500) (3) Data frame handling I0312 00:19:40.007878 7 log.go:172] (0xc004d18840) Data frame received for 5 I0312 00:19:40.007889 7 log.go:172] (0xc002d01ea0) (5) Data frame handling I0312 00:19:40.008919 7 log.go:172] (0xc004d18840) Data frame received for 1 I0312 00:19:40.008950 7 log.go:172] (0xc002734a00) (1) Data frame handling I0312 00:19:40.008959 7 log.go:172] (0xc002734a00) (1) Data frame sent I0312 00:19:40.008971 7 log.go:172] (0xc004d18840) (0xc002734a00) Stream removed, broadcasting: 1 I0312 00:19:40.008982 7 log.go:172] (0xc004d18840) Go away received I0312 00:19:40.009115 7 log.go:172] (0xc004d18840) (0xc002734a00) Stream removed, broadcasting: 1 I0312 00:19:40.009135 7 log.go:172] (0xc004d18840) (0xc00233e500) Stream removed, broadcasting: 3 I0312 00:19:40.009147 7 log.go:172] (0xc004d18840) (0xc002d01ea0) Stream removed, broadcasting: 5 Mar 12 00:19:40.009: INFO: Waiting for responses: map[] Mar 12 00:19:40.036: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=http&host=10.244.2.238&port=8080&tries=1'] Namespace:pod-network-test-5834 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:19:40.037: INFO: >>> kubeConfig: /root/.kube/config I0312 00:19:40.068728 7 log.go:172] (0xc0051962c0) (0xc00233ef00) Create stream I0312 00:19:40.068753 7 log.go:172] (0xc0051962c0) (0xc00233ef00) Stream added, broadcasting: 1 I0312 00:19:40.071393 7 log.go:172] (0xc0051962c0) Reply frame received for 1 I0312 00:19:40.071444 7 log.go:172] (0xc0051962c0) (0xc001f64460) Create stream I0312 00:19:40.071462 7 log.go:172] (0xc0051962c0) (0xc001f64460) Stream added, broadcasting: 3 I0312 00:19:40.072408 7 log.go:172] (0xc0051962c0) Reply frame received for 3 I0312 00:19:40.072435 7 log.go:172] (0xc0051962c0) (0xc002734aa0) Create stream I0312 00:19:40.072445 7 log.go:172] (0xc0051962c0) (0xc002734aa0) Stream added, broadcasting: 5 I0312 00:19:40.073452 7 log.go:172] (0xc0051962c0) Reply frame received for 5 I0312 00:19:40.136859 7 log.go:172] (0xc0051962c0) Data frame received for 3 I0312 00:19:40.136887 7 log.go:172] (0xc001f64460) (3) Data frame handling I0312 00:19:40.136901 7 log.go:172] (0xc001f64460) (3) Data frame sent I0312 00:19:40.137218 7 log.go:172] (0xc0051962c0) Data frame received for 3 I0312 00:19:40.137236 7 log.go:172] (0xc001f64460) (3) Data frame handling I0312 00:19:40.137354 7 log.go:172] (0xc0051962c0) Data frame received for 5 I0312 00:19:40.137374 7 log.go:172] (0xc002734aa0) (5) Data frame handling I0312 00:19:40.138653 7 log.go:172] (0xc0051962c0) Data frame received for 1 I0312 00:19:40.138670 7 log.go:172] (0xc00233ef00) (1) 
Data frame handling I0312 00:19:40.138702 7 log.go:172] (0xc00233ef00) (1) Data frame sent I0312 00:19:40.138828 7 log.go:172] (0xc0051962c0) (0xc00233ef00) Stream removed, broadcasting: 1 I0312 00:19:40.138851 7 log.go:172] (0xc0051962c0) Go away received I0312 00:19:40.139001 7 log.go:172] (0xc0051962c0) (0xc00233ef00) Stream removed, broadcasting: 1 I0312 00:19:40.139030 7 log.go:172] (0xc0051962c0) (0xc001f64460) Stream removed, broadcasting: 3 I0312 00:19:40.139048 7 log.go:172] (0xc0051962c0) (0xc002734aa0) Stream removed, broadcasting: 5 Mar 12 00:19:40.139: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:19:40.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5834" for this suite. • [SLOW TEST:22.525 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":131,"skipped":1962,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
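The two exec'd curls in this test are the whole connectivity check: the framework execs into a test pod and asks the netserver at 10.244.1.63 to relay an HTTP /hostname probe to each peer pod via its /dial endpoint, then verifies the set of hostnames that comes back ("Waiting for responses: map[]" means no outstanding responses remain). A minimal sketch of that round trip, assuming the /dial handler returns a JSON object with a "responses" array (the field name is an assumption based on the test's behavior):

    package sketch

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // dialResponse models the JSON the netserver's /dial endpoint is assumed
    // to return, e.g. {"responses":["netserver-0"]}.
    type dialResponse struct {
        Responses []string `json:"responses"`
        Errors    []string `json:"errors,omitempty"`
    }

    // dialHostname asks the relay pod at proxyIP to fetch the hostname of the
    // pod at targetIP over HTTP, mirroring the curl command in the log above.
    func dialHostname(proxyIP, targetIP string) ([]string, error) {
        q := url.Values{}
        q.Set("request", "hostname")
        q.Set("protocol", "http")
        q.Set("host", targetIP)
        q.Set("port", "8080")
        q.Set("tries", "1")
        resp, err := http.Get(fmt.Sprintf("http://%s:8080/dial?%s", proxyIP, q.Encode()))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var dr dialResponse
        if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
            return nil, err
        }
        return dr.Responses, nil
    }

Called with the addresses from the log, dialHostname("10.244.1.63", "10.244.1.62") should come back with the peer pod's hostname, which is what proves pod-to-pod HTTP reachability on the pod network.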
SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:19:40.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-4wkz STEP: Creating a pod to test atomic-volume-subpath Mar 12 00:19:40.224: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4wkz" in namespace "subpath-7999" to be "success or failure" Mar 12 00:19:40.228: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242909ms Mar 12 00:19:42.232: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 2.007531438s Mar 12 00:19:44.240: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 4.015880796s Mar 12 00:19:46.292: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 6.06756535s Mar 12 00:19:48.294: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 8.070348526s Mar 12 00:19:50.298: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 10.073694644s Mar 12 00:19:52.301: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 12.076608637s Mar 12 00:19:54.312: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 14.087728666s Mar 12 00:19:56.315: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 16.090875642s Mar 12 00:19:58.318: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 18.093924733s Mar 12 00:20:00.354: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Running", Reason="", readiness=true. Elapsed: 20.129973319s Mar 12 00:20:02.357: INFO: Pod "pod-subpath-test-configmap-4wkz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.132861716s STEP: Saw pod success Mar 12 00:20:02.357: INFO: Pod "pod-subpath-test-configmap-4wkz" satisfied condition "success or failure" Mar 12 00:20:02.359: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-4wkz container test-container-subpath-configmap-4wkz: STEP: delete the pod Mar 12 00:20:02.396: INFO: Waiting for pod pod-subpath-test-configmap-4wkz to disappear Mar 12 00:20:02.403: INFO: Pod pod-subpath-test-configmap-4wkz no longer exists STEP: Deleting pod pod-subpath-test-configmap-4wkz Mar 12 00:20:02.403: INFO: Deleting pod "pod-subpath-test-configmap-4wkz" in namespace "subpath-7999" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:20:02.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7999" for this suite. • [SLOW TEST:22.249 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":132,"skipped":1968,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
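The fixture behind pod-subpath-test-configmap-4wkz mounts one key of a ConfigMap at a container subPath, lets the container read it for a while, and then expects the pod to reach Succeeded, which is the "success or failure" condition polled above. A rough corev1 equivalent, with illustrative names, image, and command in place of the generated ones:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // subpathPod mounts only the "data" key of ConfigMap "subpath-config" at
    // /etc/config/data via subPath, reads it a few times, and exits zero so
    // the pod ends up Succeeded.
    func subpathPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "config",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-config"},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-subpath",
                    Image:   "busybox", // illustrative image
                    Command: []string{"sh", "-c", "for i in 1 2 3 4 5; do cat /etc/config/data; sleep 2; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "config",
                        MountPath: "/etc/config/data",
                        SubPath:   "data", // mount a single key, not the whole volume
                    }},
                }},
            },
        }
    }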
SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:20:02.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating replication controller my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd Mar 12 00:20:02.505: INFO: Pod name my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd: Found 0 pods out of 1 Mar 12 00:20:07.522: INFO: Pod name my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd: Found 1 pods out of 1 Mar 12 00:20:07.522: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd" are running Mar 12 00:20:07.524: INFO: Pod "my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd-c4vds" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:20:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:20:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:20:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:20:02 +0000 UTC Reason: Message:}]) Mar 12 00:20:07.524: INFO: Trying to dial the pod Mar 12 00:20:12.534: INFO: Controller my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd: Got expected result from replica 1 [my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd-c4vds]: "my-hostname-basic-d9681e30-bcc8-4ebf-8262-b8b2e989e7dd-c4vds", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:20:12.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4055" for this suite. • [SLOW TEST:10.131 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":133,"skipped":1993,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
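Behind the "serve a basic image" test is a one-replica ReplicationController whose pods answer HTTP requests with their own pod name; the "Got expected result from replica 1" line is the framework dialing each replica and matching the reply against the pod list. Roughly what that controller looks like in corev1 terms; the agnhost image tag and the serve-hostname port are assumptions, and the generated my-hostname-basic-... name is replaced with a placeholder:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostnameRC builds a ReplicationController whose pods serve their own
    // pod name over HTTP, so each replica can be identified by dialing it.
    func hostnameRC(name string, replicas int32) *corev1.ReplicationController {
        labels := map[string]string{"name": name}
        return &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  name,
                            Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumed image
                            Args:  []string{"serve-hostname"},
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}}, // assumed port
                        }},
                    },
                },
            },
        }
    }

Something like hostnameRC("my-hostname-basic", 1) would reproduce the shape of the controller created above.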
SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:20:12.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:20:12.596: INFO: Creating deployment "webserver-deployment" Mar 12 00:20:12.601: INFO: Waiting for observed generation 1 Mar 12 00:20:14.708: INFO: Waiting for all required pods to come up Mar 12 00:20:14.712: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 12 00:20:16.725: INFO: Waiting for deployment "webserver-deployment" to complete Mar 12 00:20:16.733: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 12 00:20:16.739: INFO: Updating deployment webserver-deployment Mar 12 00:20:16.739: INFO: Waiting for observed generation 2 Mar 12 00:20:18.764: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 12 00:20:18.767: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 12 00:20:18.771: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 12 00:20:18.779: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 12 00:20:18.779: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 12 00:20:18.781: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 12 00:20:18.785: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 12 00:20:18.785: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 12 00:20:18.790: INFO: Updating deployment webserver-deployment Mar 12 00:20:18.790: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 12 00:20:18.806: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 12 00:20:18.881: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment
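The final two "Verifying" lines are the interesting part of this test: the scale from 10 to 30 arrives while a rollout to a broken image (webserver:404) is stuck, so neither ReplicaSet can simply be set to 30. With maxSurge=3 the deployment may own at most 30+3=33 pods, and the 20 extra replicas beyond the current 8+5 are handed out roughly in proportion to each ReplicaSet's size, remainder included, which is exactly the 20 and 13 asserted above. A back-of-the-envelope sketch of that split, a simplification of the deployment controller's actual bookkeeping:

    package main

    import "fmt"

    // Proportional scaling, simplified: distribute the headroom between the
    // live ReplicaSets in proportion to their current sizes.
    func main() {
        oldRS, newRS := int32(8), int32(5) // ReplicaSet sizes when the scale arrives
        replicas, maxSurge := int32(30), int32(3)
        allowed := replicas + maxSurge              // 33 pods may exist mid-rollout
        extra := allowed - (oldRS + newRS)          // 20 replicas to hand out
        oldShare := extra * oldRS / (oldRS + newRS) // 12, integer proportion
        newShare := extra - oldShare                // 8, remainder folded in
        fmt.Println(oldRS+oldShare, newRS+newShare) // prints: 20 13
    }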
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 12 00:20:19.020: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5600 /apis/apps/v1/namespaces/deployment-5600/deployments/webserver-deployment e4b11adf-aaca-4a95-bd37-271293b7f390 939392 3 2020-03-12 00:20:12 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038f2918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-12 00:20:17 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-12 00:20:18 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 12 00:20:19.051: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5600 /apis/apps/v1/namespaces/deployment-5600/replicasets/webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 939439 3 2020-03-12 00:20:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e4b11adf-aaca-4a95-bd37-271293b7f390 0xc0038b2707 0xc0038b2708}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038b2778 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 00:20:19.051: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 12 00:20:19.051: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5600 /apis/apps/v1/namespaces/deployment-5600/replicasets/webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 939431 3 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e4b11adf-aaca-4a95-bd37-271293b7f390 0xc0038b2647 0xc0038b2648}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038b26a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 12 00:20:19.166: INFO: Pod "webserver-deployment-595b5b9587-7vq5b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7vq5b webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-7vq5b 1ad1669b-7506-4177-9df4-2783cfba5ca0 939395 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381a3c7 0xc00381a3c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.167: INFO: Pod "webserver-deployment-595b5b9587-8c55l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8c55l webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-8c55l c05dd4e5-da3c-4f18-9bc3-fd89f5a84fae 939429 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381a4e0 0xc00381a4e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.167: INFO: Pod "webserver-deployment-595b5b9587-9g5jz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9g5jz webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-9g5jz 7fd47a0b-8864-49c2-8ebc-681e90a002f1 939408 0 
2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381a5f0 0xc00381a5f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.167: INFO: Pod "webserver-deployment-595b5b9587-cnwl6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cnwl6 webserver-deployment-595b5b9587- deployment-5600 
/api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-cnwl6 d0d83a69-b673-461f-82fc-cfc596ecb7cd 939279 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381a700 0xc00381a701}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.242,StartTime:2020-03-12 00:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6c4435fe95431d1554898f1c2aadc73821ad4ee5a2f16dd878455ef2cd6c87a6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.167: INFO: Pod "webserver-deployment-595b5b9587-cxk5q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cxk5q webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-cxk5q dec81046-f2b0-4dac-94ac-2db226db91cd 939426 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381a887 0xc00381a888}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-
scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.167: INFO: Pod "webserver-deployment-595b5b9587-fs229" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fs229 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-fs229 99af6050-9029-41ff-9b43-29b00ea5be90 939247 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381a9a0 0xc00381a9a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePul
lSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.240,StartTime:2020-03-12 00:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://52ffd300f1a97009f8d4a65cc18a918ed57a7c708ed27cc91031c24945e26e61,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.168: INFO: Pod "webserver-deployment-595b5b9587-gvwkh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gvwkh webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-gvwkh 924ac2b6-852d-43d4-971b-beb619e0fcff 939436 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381ab17 0xc00381ab18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-12 00:20:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.168: INFO: Pod "webserver-deployment-595b5b9587-ns5cs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ns5cs webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-ns5cs 6c316026-b416-4163-8976-dec3737f9d43 939253 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381ac77 0xc00381ac78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.241,StartTime:2020-03-12 00:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b02d9281e6b3a3f83b71bdfc008a408d435b67008fbb14f916f92805cbc9f835,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.168: INFO: Pod "webserver-deployment-595b5b9587-p9tqb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-p9tqb webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-p9tqb 83fec77b-2858-44bd-8d6d-80add5199232 939257 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381adf7 0xc00381adf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.65,StartTime:2020-03-12 00:20:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a25719ed78fe6f4e7aa7b7110c75d2157feecda3dc8e77ac986421154fa87ad5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.168: INFO: Pod "webserver-deployment-595b5b9587-pvvb6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pvvb6 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-pvvb6 33fd0dd3-86f4-4327-8ee0-e2f964f3dbdd 939427 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381af77 0xc00381af78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.168: INFO: Pod "webserver-deployment-595b5b9587-qr24k" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qr24k webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-qr24k e8e17ede-5b83-4bc3-9939-7b9e2846a64f 939414 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381b0b0 0xc00381b0b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.169: INFO: Pod "webserver-deployment-595b5b9587-s9dl5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s9dl5 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-s9dl5 9fb3b206-5177-4ba2-9cc1-5f44a0abff3d 939428 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381b1c0 0xc00381b1c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.169: INFO: Pod "webserver-deployment-595b5b9587-sxrzv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sxrzv webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-sxrzv 114b99c9-b1f3-4e4d-9bad-00609f09ba14 939285 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381b2e0 0xc00381b2e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defau
lt-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.68,StartTime:2020-03-12 00:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b95a36ca00643d8c8d34c3774bd4efcf6ed09ce7ae95d7b33a21c26beb79ccdc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.169: INFO: Pod "webserver-deployment-595b5b9587-t94n9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t94n9 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-t94n9 eb2c364c-89d3-4ec1-ab41-44218d474022 939425 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381b467 0xc00381b468}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.170: INFO: Pod "webserver-deployment-595b5b9587-trkkg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-trkkg webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-trkkg b4b93aae-a094-461c-90bc-d3eab1778985 939262 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381b580 0xc00381b581}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.67,StartTime:2020-03-12 00:20:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://13eaa52ea0847e78ba7fdc545b22de2fa341e5b4c7021b4162441d923cdd8109,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.170: INFO: Pod "webserver-deployment-595b5b9587-txsjr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-txsjr webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-txsjr 342ffeb4-9715-4006-9e0c-016fe4edcdde 939265 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381b6f7 0xc00381b6f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.66,StartTime:2020-03-12 00:20:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://43aa13db90b3f1e5c3214d507b16f75ed84e4508913f35c5d947fa86778dea69,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.170: INFO: Pod "webserver-deployment-595b5b9587-vmb6w" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vmb6w webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-vmb6w cad6b9ba-bd7c-4eb8-91a6-5dc71bf13aab 939272 0 2020-03-12 00:20:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381b887 0xc00381b888}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.243,StartTime:2020-03-12 00:20:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:20:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fe4def09b9373ddf5adf7ab6dc8c47030c68b6f36c8643195d5dde8ee8ff3880,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.171: INFO: Pod "webserver-deployment-595b5b9587-whmq7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-whmq7 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-whmq7 f584f6b9-ed39-47ef-bb63-35dacfd76361 939407 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381ba07 0xc00381ba08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-12 00:20:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.171: INFO: Pod "webserver-deployment-595b5b9587-xc8q6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xc8q6 webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-xc8q6 4df07040-7fa4-451c-a2bf-d6487f33fc8a 939416 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381bbc7 0xc00381bbc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.171: INFO: Pod "webserver-deployment-595b5b9587-z9qdz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z9qdz webserver-deployment-595b5b9587- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-595b5b9587-z9qdz 5b8ababd-9936-4f6a-8ed5-eaefeef795c9 939418 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4a73c5f6-9a06-41de-8175-7df85885c4ac 0xc00381bce0 0xc00381bce1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.171: INFO: Pod "webserver-deployment-c7997dcc8-6w8zz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6w8zz webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-6w8zz 16e811e1-788a-4d44-881b-f232d3aa462b 939430 0 2020-03-12 
00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc00381be00 0xc00381be01}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.171: INFO: Pod "webserver-deployment-c7997dcc8-7nlkb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7nlkb webserver-deployment-c7997dcc8- deployment-5600 
/api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-7nlkb d5365a3e-151b-4cea-b98a-71d7d680ca8d 939434 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc00381bf20 0xc00381bf21}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.171: INFO: Pod "webserver-deployment-c7997dcc8-89nsq" is not available: 
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-89nsq webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-89nsq 77b210be-5250-4cf9-9403-72e128fcf7b1 939313 0 2020-03-12 00:20:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a60030 0xc002a60031}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-12 00:20:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.172: INFO: Pod "webserver-deployment-c7997dcc8-89r9q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-89r9q webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-89r9q 9b98c467-93e7-46a5-820d-da03df453d02 939420 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a601e0 0xc002a601e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountTo
ken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.172: INFO: Pod "webserver-deployment-c7997dcc8-9czhj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9czhj webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-9czhj 66a4003b-26a8-4334-9632-595a4ab91dbc 939424 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a60310 0xc002a60311}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-s
cheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.172: INFO: Pod "webserver-deployment-c7997dcc8-b66b2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b66b2 webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-b66b2 aa736635-d08b-44ae-91be-d961d8c49eb3 939438 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a60430 0xc002a60431}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectRefer
ence{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-12 00:20:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.172: INFO: Pod "webserver-deployment-c7997dcc8-fzvf7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fzvf7 webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-fzvf7 c7a1548d-b2ef-401c-a2dd-4fde31e34ed4 939404 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a605a0 0xc002a605a1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.172: INFO: Pod "webserver-deployment-c7997dcc8-hh9pc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hh9pc webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-hh9pc 450e5f6b-b060-4d7d-bc78-16f1b1f275eb 939339 0 2020-03-12 00:20:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a606c0 0xc002a606c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-12 00:20:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.173: INFO: Pod "webserver-deployment-c7997dcc8-l668s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l668s webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-l668s 77abbf15-a504-4ac6-b480-8af2ef828033 939325 0 2020-03-12 00:20:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a60830 0xc002a60831}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-12 00:20:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.173: INFO: Pod "webserver-deployment-c7997dcc8-mrwsh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mrwsh webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-mrwsh 024ed572-2b18-449d-8a5a-3f5aa6d1da1b 939316 0 2020-03-12 00:20:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a609a0 0xc002a609a1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-12 00:20:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.173: INFO: Pod "webserver-deployment-c7997dcc8-nfnd5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nfnd5 webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-nfnd5 81b98334-d448-47f8-8503-6707ab6376f9 939421 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a60b10 0xc002a60b11}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.173: INFO: Pod "webserver-deployment-c7997dcc8-tbq2s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tbq2s webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-tbq2s f9546271-745a-447c-9104-475349b13a6e 939402 0 2020-03-12 00:20:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a60c30 0xc002a60c31}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCla
ssName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:20:19.173: INFO: Pod "webserver-deployment-c7997dcc8-wt2l6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wt2l6 webserver-deployment-c7997dcc8- deployment-5600 /api/v1/namespaces/deployment-5600/pods/webserver-deployment-c7997dcc8-wt2l6 27101f5a-5bb8-4738-a842-d98b4056849e 939344 0 2020-03-12 00:20:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6f10079f-e375-46de-a98f-d878ac1e241f 0xc002a60d50 0xc002a60d51}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k9rh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k9rh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProc
essNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:20:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-12 00:20:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:20:19.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5600" for this suite. 
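Every pod dumped above is Pending with the httpd container stuck in ContainerCreating: webserver:404 resolves to no pullable image, so the new ReplicaSet can never become ready while the deployment is scaled. A minimal shell sketch of the same proportional-scaling scenario, with illustrative names and replica counts rather than the ones used by this suite:

# Create a deployment with a working image and let it become available.
kubectl create deployment webserver --image=httpd:2.4
kubectl scale deployment/webserver --replicas=10

# Start a rollout that can never finish: pods of the new ReplicaSet
# stay Pending because the webserver:404 tag cannot be pulled.
kubectl set image deployment/webserver httpd=webserver:404

# Scale while two ReplicaSets coexist; the deployment controller
# splits the new total across them in proportion to their sizes.
kubectl scale deployment/webserver --replicas=30

# Observe the proportional split between old and new ReplicaSets.
kubectl get rs -l app=webserver

The container name httpd and the app=webserver label are what kubectl create deployment derives for this image; adjust both if the deployment was created differently.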
• [SLOW TEST:6.933 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":134,"skipped":2000,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:20:19.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9384 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9384 STEP: creating replication controller externalsvc in namespace services-9384 I0312 00:20:20.304070 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9384, replica count: 2 I0312 00:20:23.354385 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 00:20:26.354543 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 12 00:20:26.530: INFO: Creating new exec pod Mar 12 00:20:32.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-9384 execpod9g66w -- /bin/sh -x -c nslookup clusterip-service' Mar 12 00:20:32.811: INFO: stderr: "I0312 00:20:32.736387 2555 log.go:172] (0xc000a2a000) (0xc0006926e0) Create stream\nI0312 00:20:32.736435 2555 log.go:172] (0xc000a2a000) (0xc0006926e0) Stream added, broadcasting: 1\nI0312 00:20:32.739221 2555 log.go:172] (0xc000a2a000) Reply frame received for 1\nI0312 00:20:32.739256 2555 log.go:172] (0xc000a2a000) (0xc0006cf360) Create stream\nI0312 00:20:32.739266 2555 log.go:172] (0xc000a2a000) (0xc0006cf360) Stream added, broadcasting: 3\nI0312 00:20:32.740173 2555 log.go:172] (0xc000a2a000) Reply frame received for 3\nI0312 00:20:32.740203 2555 log.go:172] (0xc000a2a000) (0xc000b58000) Create stream\nI0312 00:20:32.740215 2555 log.go:172] (0xc000a2a000) (0xc000b58000) Stream added, broadcasting: 5\nI0312 00:20:32.741033 2555 log.go:172] (0xc000a2a000) Reply frame 
received for 5\nI0312 00:20:32.796463 2555 log.go:172] (0xc000a2a000) Data frame received for 5\nI0312 00:20:32.796485 2555 log.go:172] (0xc000b58000) (5) Data frame handling\nI0312 00:20:32.796497 2555 log.go:172] (0xc000b58000) (5) Data frame sent\n+ nslookup clusterip-service\nI0312 00:20:32.803798 2555 log.go:172] (0xc000a2a000) Data frame received for 3\nI0312 00:20:32.803822 2555 log.go:172] (0xc0006cf360) (3) Data frame handling\nI0312 00:20:32.803837 2555 log.go:172] (0xc0006cf360) (3) Data frame sent\nI0312 00:20:32.805138 2555 log.go:172] (0xc000a2a000) Data frame received for 3\nI0312 00:20:32.805159 2555 log.go:172] (0xc0006cf360) (3) Data frame handling\nI0312 00:20:32.805174 2555 log.go:172] (0xc0006cf360) (3) Data frame sent\nI0312 00:20:32.805492 2555 log.go:172] (0xc000a2a000) Data frame received for 5\nI0312 00:20:32.805560 2555 log.go:172] (0xc000b58000) (5) Data frame handling\nI0312 00:20:32.805627 2555 log.go:172] (0xc000a2a000) Data frame received for 3\nI0312 00:20:32.805653 2555 log.go:172] (0xc0006cf360) (3) Data frame handling\nI0312 00:20:32.807318 2555 log.go:172] (0xc000a2a000) Data frame received for 1\nI0312 00:20:32.807347 2555 log.go:172] (0xc0006926e0) (1) Data frame handling\nI0312 00:20:32.807364 2555 log.go:172] (0xc0006926e0) (1) Data frame sent\nI0312 00:20:32.807385 2555 log.go:172] (0xc000a2a000) (0xc0006926e0) Stream removed, broadcasting: 1\nI0312 00:20:32.807738 2555 log.go:172] (0xc000a2a000) (0xc0006926e0) Stream removed, broadcasting: 1\nI0312 00:20:32.807759 2555 log.go:172] (0xc000a2a000) (0xc0006cf360) Stream removed, broadcasting: 3\nI0312 00:20:32.807769 2555 log.go:172] (0xc000a2a000) (0xc000b58000) Stream removed, broadcasting: 5\n" Mar 12 00:20:32.811: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9384.svc.cluster.local\tcanonical name = externalsvc.services-9384.svc.cluster.local.\nName:\texternalsvc.services-9384.svc.cluster.local\nAddress: 10.96.212.181\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9384, will wait for the garbage collector to delete the pods Mar 12 00:20:32.885: INFO: Deleting ReplicationController externalsvc took: 8.021005ms Mar 12 00:20:33.186: INFO: Terminating ReplicationController externalsvc pods took: 300.242997ms Mar 12 00:20:42.507: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:20:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9384" for this suite. 
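Reduced to plain kubectl, the type flip this spec performs looks like the sketch below; the service names and namespace match this run, while the exec pod name and port are illustrative:

# An ordinary ClusterIP service in the test namespace.
kubectl create service clusterip clusterip-service --tcp=80:80 -n services-9384

# Convert it to ExternalName; the cluster IP has to be cleared in the
# same update, after which DNS answers with a CNAME instead of an A record.
kubectl patch service clusterip-service -n services-9384 --type=merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-9384.svc.cluster.local","clusterIP":""}}'

# From any pod in the namespace the name now resolves via the CNAME,
# matching the nslookup output captured above.
kubectl exec -n services-9384 execpod -- nslookup clusterip-service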
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.107 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":135,"skipped":2045,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:20:42.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Mar 12 00:20:42.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8524' Mar 12 00:20:42.901: INFO: stderr: "" Mar 12 00:20:42.901: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 00:20:42.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8524' Mar 12 00:20:42.979: INFO: stderr: "" Mar 12 00:20:42.979: INFO: stdout: "update-demo-nautilus-c555b update-demo-nautilus-rgg69 " Mar 12 00:20:42.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c555b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8524' Mar 12 00:20:43.040: INFO: stderr: "" Mar 12 00:20:43.040: INFO: stdout: "" Mar 12 00:20:43.040: INFO: update-demo-nautilus-c555b is created but not running Mar 12 00:20:48.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8524' Mar 12 00:20:48.120: INFO: stderr: "" Mar 12 00:20:48.120: INFO: stdout: "update-demo-nautilus-c555b update-demo-nautilus-rgg69 " Mar 12 00:20:48.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c555b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8524' Mar 12 00:20:48.181: INFO: stderr: "" Mar 12 00:20:48.181: INFO: stdout: "true" Mar 12 00:20:48.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c555b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8524' Mar 12 00:20:48.241: INFO: stderr: "" Mar 12 00:20:48.241: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:20:48.241: INFO: validating pod update-demo-nautilus-c555b Mar 12 00:20:48.244: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:20:48.244: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:20:48.244: INFO: update-demo-nautilus-c555b is verified up and running Mar 12 00:20:48.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgg69 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8524' Mar 12 00:20:48.314: INFO: stderr: "" Mar 12 00:20:48.314: INFO: stdout: "true" Mar 12 00:20:48.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgg69 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8524' Mar 12 00:20:48.379: INFO: stderr: "" Mar 12 00:20:48.380: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:20:48.380: INFO: validating pod update-demo-nautilus-rgg69 Mar 12 00:20:48.382: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:20:48.382: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:20:48.382: INFO: update-demo-nautilus-rgg69 is verified up and running STEP: using delete to clean up resources Mar 12 00:20:48.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8524' Mar 12 00:20:48.450: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:20:48.450: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 12 00:20:48.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8524' Mar 12 00:20:48.523: INFO: stderr: "No resources found in kubectl-8524 namespace.\n" Mar 12 00:20:48.523: INFO: stdout: "" Mar 12 00:20:48.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8524 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 00:20:48.589: INFO: stderr: "" Mar 12 00:20:48.589: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:20:48.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8524" for this suite. • [SLOW TEST:6.037 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":280,"completed":136,"skipped":2071,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:20:48.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-482476b3-ca17-4f1a-8a3d-5c7d36bd2e66 STEP: Creating a pod to test consume secrets Mar 12 00:20:48.700: INFO: Waiting up to 5m0s for pod "pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c" in namespace "secrets-5294" to be "success or failure" Mar 12 00:20:48.705: INFO: Pod "pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.895175ms Mar 12 00:20:50.709: INFO: Pod "pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009454393s Mar 12 00:20:52.713: INFO: Pod "pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013294552s STEP: Saw pod success Mar 12 00:20:52.713: INFO: Pod "pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c" satisfied condition "success or failure" Mar 12 00:20:52.716: INFO: Trying to get logs from node latest-worker pod pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c container secret-volume-test: STEP: delete the pod Mar 12 00:20:52.770: INFO: Waiting for pod pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c to disappear Mar 12 00:20:52.777: INFO: Pod pod-secrets-e4867686-54e0-41cb-9751-05e4ca86732c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:20:52.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5294" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":137,"skipped":2077,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:20:52.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:20:52.867: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 5.738202ms)
Mar 12 00:20:52.888: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 21.04686ms)
Mar 12 00:20:52.902: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 13.385588ms)
Mar 12 00:20:52.905: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.299534ms)
Mar 12 00:20:52.908: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.677259ms)
Mar 12 00:20:52.911: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.742809ms)
Mar 12 00:20:52.913: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.378588ms)
Mar 12 00:20:52.916: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.492829ms)
Mar 12 00:20:52.918: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.401322ms)
Mar 12 00:20:52.920: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.298908ms)
Mar 12 00:20:52.923: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.694188ms)
Mar 12 00:20:52.925: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.315376ms)
Mar 12 00:20:52.928: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.461052ms)
Mar 12 00:20:52.930: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.312914ms)
Mar 12 00:20:52.932: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.088822ms)
Mar 12 00:20:52.935: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.218778ms)
Mar 12 00:20:52.937: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 1.931589ms)
Mar 12 00:20:52.939: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.038983ms)
Mar 12 00:20:52.940: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 1.800217ms)
Mar 12 00:20:52.942: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 1.975041ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:20:52.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7387" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":280,"completed":138,"skipped":2085,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:20:52.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Mar 12 00:20:53.024: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:21:07.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6091" for this suite. • [SLOW TEST:14.345 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":139,"skipped":2115,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:21:07.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:21:23.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3376" for this suite. • [SLOW TEST:16.144 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":140,"skipped":2121,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:21:23.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-ca32de0f-c088-4d04-99c4-7ddbd709fe16 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:21:25.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8696" for this suite. 
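The spec above verifies that both the data (UTF-8 strings) and binaryData (base64-encoded bytes) keys of a ConfigMap are materialized as files inside a mounted volume. A minimal reproduction sketch with kubectl; all object names are illustrative, and the busybox image is assumed to provide the base64 applet:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-config
  data:
    text-key: "hello"
  binaryData:
    binary-key: "3q2+7w=="   # base64 for the bytes 0xDE 0xAD 0xBE 0xEF
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-reader
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox
      # Print the text key, then round-trip the binary key back through base64:
      command: ["sh", "-c", "cat /etc/config/text-key; base64 /etc/config/binary-key"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: demo-config
  EOF
  kubectl logs cm-reader    # after completion: "hello" followed by "3q2+7w=="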
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2121,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:21:25.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2368.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 210.142.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.142.210_udp@PTR;check="$$(dig +tcp +noall +answer +search 210.142.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.142.210_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2368.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 210.142.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.142.210_udp@PTR;check="$$(dig +tcp +noall +answer +search 210.142.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.142.210_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 00:21:29.865: INFO: Unable to read wheezy_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.867: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.870: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.872: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.891: INFO: Unable to read jessie_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.893: INFO: Unable to read jessie_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.896: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.898: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:29.911: INFO: Lookups using dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd failed for: [wheezy_udp@dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_udp@dns-test-service.dns-2368.svc.cluster.local jessie_tcp@dns-test-service.dns-2368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local] Mar 12 00:21:34.916: INFO: Unable to read wheezy_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:34.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods 
dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:34.926: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:34.930: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:34.958: INFO: Unable to read jessie_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:34.960: INFO: Unable to read jessie_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:34.963: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:34.966: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:35.010: INFO: Lookups using dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd failed for: [wheezy_udp@dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_udp@dns-test-service.dns-2368.svc.cluster.local jessie_tcp@dns-test-service.dns-2368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local] Mar 12 00:21:39.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:39.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:39.943: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:39.946: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:39.966: INFO: Unable to read jessie_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the 
server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:39.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:39.973: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:39.991: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:40.007: INFO: Lookups using dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd failed for: [wheezy_udp@dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_udp@dns-test-service.dns-2368.svc.cluster.local jessie_tcp@dns-test-service.dns-2368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local] Mar 12 00:21:44.915: INFO: Unable to read wheezy_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.918: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.920: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.922: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.939: INFO: Unable to read jessie_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.943: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.945: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod 
dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:44.965: INFO: Lookups using dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd failed for: [wheezy_udp@dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_udp@dns-test-service.dns-2368.svc.cluster.local jessie_tcp@dns-test-service.dns-2368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local] Mar 12 00:21:49.916: INFO: Unable to read wheezy_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.922: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.925: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.947: INFO: Unable to read jessie_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.949: INFO: Unable to read jessie_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.952: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:49.969: INFO: Lookups using dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd failed for: [wheezy_udp@dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_udp@dns-test-service.dns-2368.svc.cluster.local jessie_tcp@dns-test-service.dns-2368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local] Mar 12 
00:21:54.915: INFO: Unable to read wheezy_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.918: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.920: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.923: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.939: INFO: Unable to read jessie_udp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.943: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.945: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local from pod dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd: the server could not find the requested resource (get pods dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd) Mar 12 00:21:54.960: INFO: Lookups using dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd failed for: [wheezy_udp@dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@dns-test-service.dns-2368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_udp@dns-test-service.dns-2368.svc.cluster.local jessie_tcp@dns-test-service.dns-2368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2368.svc.cluster.local] Mar 12 00:21:59.962: INFO: DNS probes using dns-2368/dns-test-9a226046-4ff7-4927-b9d8-b1f6fda74bdd succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:00.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2368" for this suite. 
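The wheezy and jessie probe pods above loop over dig lookups for the service's A and SRV records, the pod's generated A record, and the ClusterIP's PTR record, writing an OK marker file for each successful answer; the repeated "Unable to read ... (get pods ...)" lines appear to be the framework polling the probe pod for those marker files before they have been written, and the run converges to "DNS probes ... succeeded". The record shapes being asserted, shown with illustrative names (dig assumed available in the pod, with the cluster search domains in resolv.conf):

  # A record of a Service named my-svc in namespace demo:
  dig +short my-svc.demo.svc.cluster.local A
  # SRV record for a named port "http" over TCP on that Service:
  dig +short _http._tcp.my-svc.demo.svc.cluster.local SRV
  # Pod A record: the pod IP with dots replaced by dashes (10.244.1.7 -> 10-244-1-7):
  dig +short 10-244-1-7.demo.pod.cluster.local A
  # Reverse (PTR) lookup of a ClusterIP such as 10.96.142.210:
  dig +short -x 10.96.142.210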
• [SLOW TEST:34.680 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":280,"completed":142,"skipped":2131,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:00.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 00:22:04.501: INFO: DNS probes using dns-9719/dns-test-aa88aa3b-59f6-415a-9e5c-ea2338de75cb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:04.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9719" for this suite. 
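Unlike the previous spec, this one only asserts the always-present apiserver Service record (kubernetes.default) and the probe pod's own A record. A one-line spot check from a throwaway pod, assuming the busybox image bundles nslookup:

  kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
    nslookup kubernetes.default.svc.cluster.local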
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":280,"completed":143,"skipped":2134,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:04.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 12 00:22:05.529: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 12 00:22:07.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569325, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569325, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569325, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569325, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:22:10.572: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:22:10.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:11.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2075" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.248 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":144,"skipped":2140,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:11.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 12 00:22:11.854: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1" in namespace "downward-api-3360" to be "success or failure" Mar 12 00:22:11.858: INFO: Pod "downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.971975ms Mar 12 00:22:13.863: INFO: Pod "downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008476126s Mar 12 00:22:15.867: INFO: Pod "downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012511333s STEP: Saw pod success Mar 12 00:22:15.867: INFO: Pod "downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1" satisfied condition "success or failure" Mar 12 00:22:15.870: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1 container client-container: STEP: delete the pod Mar 12 00:22:15.905: INFO: Waiting for pod downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1 to disappear Mar 12 00:22:15.919: INFO: Pod downwardapi-volume-27dd9232-0a95-4ca5-9b14-ad5814cd65b1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:15.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3360" for this suite. 
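The pod under test mounts a downwardAPI volume whose item points at the container's own CPU request via resourceFieldRef; the kubelet writes the value, scaled by the divisor, into the file. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m
  EOF
  kubectl logs downward-demo    # prints "250": the request in millicores, per the 1m divisor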
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":145,"skipped":2142,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:15.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override all Mar 12 00:22:16.054: INFO: Waiting up to 5m0s for pod "client-containers-4377ce48-b61a-4975-bf5d-d1d7fe413e2e" in namespace "containers-2243" to be "success or failure" Mar 12 00:22:16.058: INFO: Pod "client-containers-4377ce48-b61a-4975-bf5d-d1d7fe413e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.828712ms Mar 12 00:22:18.062: INFO: Pod "client-containers-4377ce48-b61a-4975-bf5d-d1d7fe413e2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008709353s STEP: Saw pod success Mar 12 00:22:18.062: INFO: Pod "client-containers-4377ce48-b61a-4975-bf5d-d1d7fe413e2e" satisfied condition "success or failure" Mar 12 00:22:18.064: INFO: Trying to get logs from node latest-worker pod client-containers-4377ce48-b61a-4975-bf5d-d1d7fe413e2e container test-container: STEP: delete the pod Mar 12 00:22:18.094: INFO: Waiting for pod client-containers-4377ce48-b61a-4975-bf5d-d1d7fe413e2e to disappear Mar 12 00:22:18.104: INFO: Pod client-containers-4377ce48-b61a-4975-bf5d-d1d7fe413e2e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:18.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2243" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2145,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:18.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:22:18.168: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.36518ms)
Mar 12 00:22:18.195: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 26.154373ms)
Mar 12 00:22:18.198: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.46984ms)
Mar 12 00:22:18.201: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.648197ms)
Mar 12 00:22:18.204: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.97917ms)
Mar 12 00:22:18.207: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.657359ms)
Mar 12 00:22:18.209: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.387903ms)
Mar 12 00:22:18.211: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.389622ms)
Mar 12 00:22:18.214: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.586952ms)
Mar 12 00:22:18.217: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.650439ms)
Mar 12 00:22:18.219: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.439786ms)
Mar 12 00:22:18.222: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.572769ms)
Mar 12 00:22:18.224: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.246164ms)
Mar 12 00:22:18.226: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.40379ms)
Mar 12 00:22:18.229: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.597012ms)
Mar 12 00:22:18.232: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.680428ms)
Mar 12 00:22:18.234: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.361274ms)
Mar 12 00:22:18.236: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.269597ms)
Mar 12 00:22:18.239: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.340411ms)
Mar 12 00:22:18.241: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/
(200; 2.244193ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:18.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7498" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":280,"completed":147,"skipped":2193,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:18.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:22:18.637: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 00:22:20.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569338, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569338, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569338, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569338, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:22:23.710: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API 
STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:35.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6030" for this suite. STEP: Destroying namespace "webhook-6030-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.809 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":148,"skipped":2238,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:36.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name projected-secret-test-546204b1-aa8a-4979-9792-4f3eea6c0ae1 STEP: Creating a pod to test consume secrets Mar 12 00:22:36.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f0cac09f-f1d6-4bbc-ada8-23a031dd747d" in namespace "projected-375" to be "success or failure" Mar 12 00:22:36.129: INFO: Pod "pod-projected-secrets-f0cac09f-f1d6-4bbc-ada8-23a031dd747d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.830429ms Mar 12 00:22:38.131: INFO: Pod "pod-projected-secrets-f0cac09f-f1d6-4bbc-ada8-23a031dd747d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.004159713s STEP: Saw pod success Mar 12 00:22:38.131: INFO: Pod "pod-projected-secrets-f0cac09f-f1d6-4bbc-ada8-23a031dd747d" satisfied condition "success or failure" Mar 12 00:22:38.133: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f0cac09f-f1d6-4bbc-ada8-23a031dd747d container secret-volume-test: STEP: delete the pod Mar 12 00:22:38.171: INFO: Waiting for pod pod-projected-secrets-f0cac09f-f1d6-4bbc-ada8-23a031dd747d to disappear Mar 12 00:22:38.190: INFO: Pod pod-projected-secrets-f0cac09f-f1d6-4bbc-ada8-23a031dd747d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:38.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-375" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":149,"skipped":2262,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:38.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-9759/secret-test-9334c116-27bc-4691-8faa-a32e8421615d STEP: Creating a pod to test consume secrets Mar 12 00:22:38.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-69094ef7-eef7-4f26-9fa0-1f92e4f54457" in namespace "secrets-9759" to be "success or failure" Mar 12 00:22:38.266: INFO: Pod "pod-configmaps-69094ef7-eef7-4f26-9fa0-1f92e4f54457": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276584ms Mar 12 00:22:40.441: INFO: Pod "pod-configmaps-69094ef7-eef7-4f26-9fa0-1f92e4f54457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179171084s STEP: Saw pod success Mar 12 00:22:40.441: INFO: Pod "pod-configmaps-69094ef7-eef7-4f26-9fa0-1f92e4f54457" satisfied condition "success or failure" Mar 12 00:22:40.443: INFO: Trying to get logs from node latest-worker pod pod-configmaps-69094ef7-eef7-4f26-9fa0-1f92e4f54457 container env-test: STEP: delete the pod Mar 12 00:22:40.466: INFO: Waiting for pod pod-configmaps-69094ef7-eef7-4f26-9fa0-1f92e4f54457 to disappear Mar 12 00:22:40.469: INFO: Pod pod-configmaps-69094ef7-eef7-4f26-9fa0-1f92e4f54457 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:40.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9759" for this suite. 
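Here the secret value is injected through the container environment (the env-test container) rather than a volume. Equivalent wiring with valueFrom.secretKeyRef, using illustrative names; stringData spares the manual base64 step that the data field would require:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: env-demo-secret
  stringData:
    username: admin
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-test-pod
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo username=$SECRET_USERNAME"]
      env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: env-demo-secret
            key: username
  EOF
  kubectl logs env-test-pod    # -> username=admin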
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":150,"skipped":2269,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:22:40.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:22:40.535: INFO: Creating ReplicaSet my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679 Mar 12 00:22:40.584: INFO: Pod name my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679: Found 0 pods out of 1 Mar 12 00:22:45.608: INFO: Pod name my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679: Found 1 pods out of 1 Mar 12 00:22:45.608: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679" is running Mar 12 00:22:45.610: INFO: Pod "my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679-rqr7f" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:22:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:22:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:22:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 00:22:40 +0000 UTC Reason: Message:}]) Mar 12 00:22:45.610: INFO: Trying to dial the pod Mar 12 00:22:50.619: INFO: Controller my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679: Got expected result from replica 1 [my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679-rqr7f]: "my-hostname-basic-2085a179-db0e-45f3-8922-86831e8c8679-rqr7f", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:22:50.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1075" for this suite. 
• [SLOW TEST:10.148 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":151,"skipped":2269,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:22:50.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar 12 00:22:51.429: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Mar 12 00:22:53.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 12 00:22:56.491: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Mar 12 00:22:56.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:22:57.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9217" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:7.176 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":152,"skipped":2275,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:22:57.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:22:59.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4380" for this suite.
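The Kubelet spec above schedules a busybox command and asserts that its stdout shows up in the container logs. A compact sketch of the two halves, pod creation and log retrieval, with illustrative names (assuming client-go v0.18+ and an initialized clientset passed in):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runAndReadLogs creates a run-once busybox pod that echoes a message, then
// reads the message back through the kubelet's logs endpoint.
func runAndReadLogs(cs *kubernetes.Clientset, ns string) (string, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo 'Hello World'"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		return "", err
	}
	// The real suite waits for the container to terminate here; once it has
	// run, the output is readable via GetLogs.
	raw, err := cs.CoreV1().Pods(ns).GetLogs("busybox-scheduling-demo", &corev1.PodLogOptions{}).DoRaw(context.TODO())
	return string(raw), err
}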
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":153,"skipped":2335,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:22:59.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-707711df-07f3-4dfe-ab35-9f9e39dcb92b
STEP: Creating a pod to test consume configMaps
Mar 12 00:23:00.000: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52a39182-f7c5-4f3c-92e4-fa8cbd02a173" in namespace "projected-4807" to be "success or failure"
Mar 12 00:23:00.019: INFO: Pod "pod-projected-configmaps-52a39182-f7c5-4f3c-92e4-fa8cbd02a173": Phase="Pending", Reason="", readiness=false. Elapsed: 19.519458ms
Mar 12 00:23:02.023: INFO: Pod "pod-projected-configmaps-52a39182-f7c5-4f3c-92e4-fa8cbd02a173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023095925s
STEP: Saw pod success
Mar 12 00:23:02.023: INFO: Pod "pod-projected-configmaps-52a39182-f7c5-4f3c-92e4-fa8cbd02a173" satisfied condition "success or failure"
Mar 12 00:23:02.025: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-52a39182-f7c5-4f3c-92e4-fa8cbd02a173 container projected-configmap-volume-test:
STEP: delete the pod
Mar 12 00:23:02.088: INFO: Waiting for pod pod-projected-configmaps-52a39182-f7c5-4f3c-92e4-fa8cbd02a173 to disappear
Mar 12 00:23:02.091: INFO: Pod pod-projected-configmaps-52a39182-f7c5-4f3c-92e4-fa8cbd02a173 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:23:02.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4807" for this suite.
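The "mappings and Item mode" wording in the spec above refers to a projected ConfigMap volume whose items remap a key to a different file path with an explicit per-item mode. A sketch of just that volume definition; the key, path, and mode here are illustrative, since the suite's exact values are not visible in the log:

package main

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume builds a projected volume that exposes one
// ConfigMap key under a remapped path with a per-item file mode (the
// "Item mode" from the spec title).
func projectedConfigMapVolume(cmName string) corev1.Volume {
	mode := int32(0400) // illustrative read-only-for-owner mode
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // key in the ConfigMap
							Path: "path/to/data-2", // remapped file name in the volume
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}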
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":154,"skipped":2341,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:23:02.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Mar 12 00:23:02.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Mar 12 00:23:13.127: INFO: >>> kubeConfig: /root/.kube/config
Mar 12 00:23:15.980: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:23:27.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4800" for this suite.
• [SLOW TEST:24.978 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":155,"skipped":2341,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:23:27.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 12 00:23:27.592: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 12 00:23:29.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569407, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569407, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569407, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569407, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 12 00:23:32.616: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Mar 12 00:23:32.636: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:23:32.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-408" for this suite.
STEP: Destroying namespace "webhook-408-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:5.704 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":156,"skipped":2344,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:23:32.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 12 00:23:34.869: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:23:34.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8700" for this suite.
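The container-runtime spec above hinges on TerminationMessagePolicy: with FallbackToLogsOnError, a container that fails without writing its termination-log file has the tail of its log (here "DONE") reported as the termination message. A sketch of a container spec with that shape, with illustrative names and command:

package main

import corev1 "k8s.io/api/core/v1"

// fallbackToLogsContainer echoes DONE to stdout and exits non-zero without
// touching the termination-log file, so the kubelet falls back to the log
// tail as the termination message, which is what the assertion checks.
func fallbackToLogsContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
		TerminationMessagePath:   "/dev/termination-log", // the default path
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}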
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":157,"skipped":2350,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:23:34.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-540d4443-a8fb-47b2-8ee7-760ede2f812d
STEP: Creating a pod to test consume secrets
Mar 12 00:23:34.995: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8516410e-45b0-49a9-9b26-1ce566a1ddb9" in namespace "projected-7186" to be "success or failure"
Mar 12 00:23:34.999: INFO: Pod "pod-projected-secrets-8516410e-45b0-49a9-9b26-1ce566a1ddb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.526031ms
Mar 12 00:23:37.003: INFO: Pod "pod-projected-secrets-8516410e-45b0-49a9-9b26-1ce566a1ddb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008234623s
STEP: Saw pod success
Mar 12 00:23:37.003: INFO: Pod "pod-projected-secrets-8516410e-45b0-49a9-9b26-1ce566a1ddb9" satisfied condition "success or failure"
Mar 12 00:23:37.005: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8516410e-45b0-49a9-9b26-1ce566a1ddb9 container projected-secret-volume-test:
STEP: delete the pod
Mar 12 00:23:37.024: INFO: Waiting for pod pod-projected-secrets-8516410e-45b0-49a9-9b26-1ce566a1ddb9 to disappear
Mar 12 00:23:37.029: INFO: Pod pod-projected-secrets-8516410e-45b0-49a9-9b26-1ce566a1ddb9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:23:37.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7186" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":158,"skipped":2350,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:23:37.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 12 00:23:41.173: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 12 00:23:41.179: INFO: Pod pod-with-prestop-http-hook still exists
Mar 12 00:23:43.179: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 12 00:23:43.183: INFO: Pod pod-with-prestop-http-hook still exists
Mar 12 00:23:45.179: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 12 00:23:45.183: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:23:45.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2653" for this suite.
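The lifecycle-hook spec above registers a preStop httpGet handler, deletes the pod, and then checks that the helper pod received the hook request during termination, which is why the pod lingers through several "still exists" polls. A sketch of the pod shape, with illustrative names and port; corev1.Handler is the type name in client-go of this vintage (renamed LifecycleHandler in later releases):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPreStopHTTPHook returns a pod whose deletion triggers an HTTP GET
// against a helper pod at handlerIP; the helper records the request, which
// is how "check prestop hook" can later verify the hook fired.
func podWithPreStopHTTPHook(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop", // illustrative echo endpoint
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}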
• [SLOW TEST:8.161 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":159,"skipped":2364,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:23:45.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 12 00:23:47.369: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:23:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6804" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2372,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:23:47.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0312 00:24:27.505048       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 12 00:24:27.505: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:24:27.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8532" for this suite.
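The garbage-collector spec above deletes a ReplicationController with the orphan policy and then waits 30 seconds to confirm the pods were not cascaded away. The delete call at the heart of it looks roughly like this (illustrative RC name; assumes client-go v0.18+):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods removes the RC itself but, because of the Orphan
// propagation policy, the garbage collector leaves its pods running, which
// is exactly what the 30-second observation window above verifies.
func deleteRCOrphaningPods(cs *kubernetes.Clientset, ns, rcName string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), rcName,
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
}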
• [SLOW TEST:40.117 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":161,"skipped":2377,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:24:27.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8343, will wait for the garbage collector to delete the pods
Mar 12 00:24:31.720: INFO: Deleting Job.batch foo took: 16.23942ms
Mar 12 00:24:32.020: INFO: Terminating Job.batch foo pods took: 300.216323ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:12.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8343" for this suite.
• [SLOW TEST:45.034 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":162,"skipped":2384,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:12.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Mar 12 00:25:12.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3" in namespace "downward-api-8281" to be "success or failure"
Mar 12 00:25:12.607: INFO: Pod "downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341116ms
Mar 12 00:25:14.611: INFO: Pod "downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3": Phase="Running", Reason="", readiness=true. Elapsed: 2.008478531s
Mar 12 00:25:16.614: INFO: Pod "downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01159723s
STEP: Saw pod success
Mar 12 00:25:16.614: INFO: Pod "downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3" satisfied condition "success or failure"
Mar 12 00:25:16.615: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3 container client-container:
STEP: delete the pod
Mar 12 00:25:16.638: INFO: Waiting for pod downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3 to disappear
Mar 12 00:25:16.642: INFO: Pod downwardapi-volume-7e917bc0-fd0f-4ecd-8eca-8887761af5c3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:16.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8281" for this suite.
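The downward-API spec above mounts the container's own requests.memory as a file and reads it back from the pod logs. A sketch of the container/volume pairing, with illustrative names and request size:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPIMemoryRequest wires a container's memory request into a file
// via a downward API volume; the container just cats the file so the value
// can be verified from its logs.
func downwardAPIMemoryRequest() (corev1.Container, corev1.Volume) {
	container := corev1.Container{
		Name:    "client-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceMemory: resource.MustParse("32Mi"), // illustrative
			},
		},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	volume := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.memory",
					},
				}},
			},
		},
	}
	return container, volume
}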
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":163,"skipped":2391,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:16.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-c67224ba-3b50-4040-8062-24dd324fea73
STEP: Creating a pod to test consume secrets
Mar 12 00:25:16.743: INFO: Waiting up to 5m0s for pod "pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c" in namespace "secrets-1388" to be "success or failure"
Mar 12 00:25:16.751: INFO: Pod "pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.894537ms
Mar 12 00:25:18.754: INFO: Pod "pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011328141s
Mar 12 00:25:20.757: INFO: Pod "pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014574115s
STEP: Saw pod success
Mar 12 00:25:20.757: INFO: Pod "pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c" satisfied condition "success or failure"
Mar 12 00:25:20.759: INFO: Trying to get logs from node latest-worker pod pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c container secret-volume-test:
STEP: delete the pod
Mar 12 00:25:20.783: INFO: Waiting for pod pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c to disappear
Mar 12 00:25:20.796: INFO: Pod pod-secrets-67920b50-a80c-4025-ab9c-92c9259f7e6c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:20.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1388" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":164,"skipped":2425,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:20.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 12 00:25:23.403: INFO: Successfully updated pod "pod-update-7b065d33-0577-4ea1-999d-163f900e1e86"
STEP: verifying the updated pod is in kubernetes
Mar 12 00:25:23.426: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:23.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8144" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":165,"skipped":2449,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:23.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Mar 12 00:25:23.521: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 12 00:25:23.529: INFO: Waiting for terminating namespaces to be deleted...
Mar 12 00:25:23.532: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Mar 12 00:25:23.536: INFO: pod-update-7b065d33-0577-4ea1-999d-163f900e1e86 from pods-8144 started at 2020-03-12 00:25:20 +0000 UTC (1 container statuses recorded)
Mar 12 00:25:23.536: INFO: Container nginx ready: true, restart count 0
Mar 12 00:25:23.536: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded)
Mar 12 00:25:23.536: INFO: Container kube-proxy ready: true, restart count 0
Mar 12 00:25:23.536: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded)
Mar 12 00:25:23.536: INFO: Container kindnet-cni ready: true, restart count 0
Mar 12 00:25:23.536: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Mar 12 00:25:23.549: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded)
Mar 12 00:25:23.549: INFO: Container kube-proxy ready: true, restart count 0
Mar 12 00:25:23.549: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded)
Mar 12 00:25:23.549: INFO: Container kindnet-cni ready: true, restart count 0
Mar 12 00:25:23.549: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded)
Mar 12 00:25:23.549: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
Mar 12 00:25:23.632: INFO: Pod coredns-6955765f44-cgshp requesting resource cpu=100m on Node latest-worker2
Mar 12 00:25:23.632: INFO: Pod kindnet-2j5xm requesting resource cpu=100m on Node latest-worker
Mar 12 00:25:23.632: INFO: Pod kindnet-spz5f requesting resource cpu=100m on Node latest-worker2
Mar 12 00:25:23.632: INFO: Pod kube-proxy-9jc24 requesting resource cpu=0m on Node latest-worker
Mar 12 00:25:23.632: INFO: Pod kube-proxy-cx5xz requesting resource cpu=0m on Node latest-worker2
Mar 12 00:25:23.632: INFO: Pod pod-update-7b065d33-0577-4ea1-999d-163f900e1e86 requesting resource cpu=0m on Node latest-worker
STEP: Starting Pods to consume most of the cluster CPU.
Mar 12 00:25:23.632: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
Mar 12 00:25:23.645: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-87e5d456-4584-4277-8c2c-d04c2baf8c86.15fb66ccc7e6a6e8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3762/filler-pod-87e5d456-4584-4277-8c2c-d04c2baf8c86 to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-87e5d456-4584-4277-8c2c-d04c2baf8c86.15fb66ccf527e777], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-87e5d456-4584-4277-8c2c-d04c2baf8c86.15fb66cd072df021], Reason = [Created], Message = [Created container filler-pod-87e5d456-4584-4277-8c2c-d04c2baf8c86]
STEP: Considering event: Type = [Normal], Name = [filler-pod-87e5d456-4584-4277-8c2c-d04c2baf8c86.15fb66cd175016e8], Reason = [Started], Message = [Started container filler-pod-87e5d456-4584-4277-8c2c-d04c2baf8c86]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b0e4faee-3184-4bfa-aa78-c0abc384a0e4.15fb66ccc873ed98], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3762/filler-pod-b0e4faee-3184-4bfa-aa78-c0abc384a0e4 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b0e4faee-3184-4bfa-aa78-c0abc384a0e4.15fb66ccf71d4115], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b0e4faee-3184-4bfa-aa78-c0abc384a0e4.15fb66cd068f381e], Reason = [Created], Message = [Created container filler-pod-b0e4faee-3184-4bfa-aa78-c0abc384a0e4]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b0e4faee-3184-4bfa-aa78-c0abc384a0e4.15fb66cd15843f62], Reason = [Started], Message = [Started container filler-pod-b0e4faee-3184-4bfa-aa78-c0abc384a0e4]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15fb66cdb7c46bcd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:28.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3762" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
• [SLOW TEST:5.356 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":280,"completed":166,"skipped":2461,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:28.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 12 00:25:28.928: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3649 /api/v1/namespaces/watch-3649/configmaps/e2e-watch-test-resource-version 20faa83f-d421-4be0-bfb4-b51f2253efee 941901 0 2020-03-12 00:25:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 12 00:25:28.928: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3649 /api/v1/namespaces/watch-3649/configmaps/e2e-watch-test-resource-version 20faa83f-d421-4be0-bfb4-b51f2253efee 941902 0 2020-03-12 00:25:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:28.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3649" for this suite.
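The Watchers spec above is about replay: all the configmap changes happen first, then a watch is opened at the resource version returned by the first update, and the later MODIFIED and DELETED events are still delivered even though they predate the watch. A minimal sketch with illustrative names (assuming client-go v0.18+):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFromResourceVersion opens a configmap watch starting at rv (the
// version returned by an earlier update); the API server replays every
// change after that version, so past events still arrive on the channel.
func watchFromResourceVersion(cs *kubernetes.Clientset, ns, rv string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: rv,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // in the spec above: MODIFIED, then DELETED
	}
	return nil
}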
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":167,"skipped":2461,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:28.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Mar 12 00:25:28.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ef738ea-48a6-4a60-967c-f038f9d7d64a" in namespace "downward-api-4127" to be "success or failure"
Mar 12 00:25:29.003: INFO: Pod "downwardapi-volume-4ef738ea-48a6-4a60-967c-f038f9d7d64a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060688ms
Mar 12 00:25:31.006: INFO: Pod "downwardapi-volume-4ef738ea-48a6-4a60-967c-f038f9d7d64a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007108253s
STEP: Saw pod success
Mar 12 00:25:31.006: INFO: Pod "downwardapi-volume-4ef738ea-48a6-4a60-967c-f038f9d7d64a" satisfied condition "success or failure"
Mar 12 00:25:31.008: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4ef738ea-48a6-4a60-967c-f038f9d7d64a container client-container:
STEP: delete the pod
Mar 12 00:25:31.050: INFO: Waiting for pod downwardapi-volume-4ef738ea-48a6-4a60-967c-f038f9d7d64a to disappear
Mar 12 00:25:31.077: INFO: Pod downwardapi-volume-4ef738ea-48a6-4a60-967c-f038f9d7d64a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:31.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4127" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":168,"skipped":2470,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:31.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Mar 12 00:25:31.165: INFO: >>> kubeConfig: /root/.kube/config
Mar 12 00:25:34.019: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:44.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7980" for this suite.
• [SLOW TEST:13.575 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":169,"skipped":2496,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:44.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Mar 12 00:25:44.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f" in namespace "projected-908" to be "success or failure"
Mar 12 00:25:44.783: INFO: Pod "downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.012172ms
Mar 12 00:25:46.787: INFO: Pod "downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026754484s
Mar 12 00:25:48.790: INFO: Pod "downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030427509s
STEP: Saw pod success
Mar 12 00:25:48.790: INFO: Pod "downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f" satisfied condition "success or failure"
Mar 12 00:25:48.793: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f container client-container:
STEP: delete the pod
Mar 12 00:25:48.832: INFO: Waiting for pod downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f to disappear
Mar 12 00:25:48.836: INFO: Pod downwardapi-volume-d47b83f9-0acc-4347-b278-53814262623f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:48.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-908" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":170,"skipped":2501,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:48.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-ef0ffa25-0db7-4516-80c9-44f909aa698a
STEP: Creating a pod to test consume secrets
Mar 12 00:25:48.963: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c5d27f6-e87f-4432-a947-01f289bb687d" in namespace "projected-6435" to be "success or failure"
Mar 12 00:25:48.974: INFO: Pod "pod-projected-secrets-6c5d27f6-e87f-4432-a947-01f289bb687d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662047ms
Mar 12 00:25:50.977: INFO: Pod "pod-projected-secrets-6c5d27f6-e87f-4432-a947-01f289bb687d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013997356s
STEP: Saw pod success
Mar 12 00:25:50.977: INFO: Pod "pod-projected-secrets-6c5d27f6-e87f-4432-a947-01f289bb687d" satisfied condition "success or failure"
Mar 12 00:25:50.979: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-6c5d27f6-e87f-4432-a947-01f289bb687d container projected-secret-volume-test:
STEP: delete the pod
Mar 12 00:25:50.999: INFO: Waiting for pod pod-projected-secrets-6c5d27f6-e87f-4432-a947-01f289bb687d to disappear
Mar 12 00:25:51.009: INFO: Pod pod-projected-secrets-6c5d27f6-e87f-4432-a947-01f289bb687d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:51.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6435" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2504,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 12 00:25:51.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1297.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1297.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1297.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1297.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1297.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1297.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 12 00:25:55.207: INFO: DNS probes using dns-1297/dns-test-5e2e400c-3ee9-4180-9dc5-b2ba516437b5 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Mar 12 00:25:55.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1297" for this suite.
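The DNS spec above gets its pod A record from the combination of a headless service and a pod whose hostname/subdomain match it: that pairing is what makes dns-querier-2.dns-test-service-2.dns-1297.svc.cluster.local resolvable to the getent probes. A sketch of the two objects, with illustrative image, port, and labels:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessServiceAndPod builds a headless service plus a pod whose
// hostname/subdomain match it, which yields the per-pod DNS name
// <hostname>.<subdomain>.<namespace>.svc.cluster.local.
func headlessServiceAndPod() (*corev1.Service, *corev1.Pod) {
	labels := map[string]string{"name": "dns-querier-2"}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: DNS resolves to pod IPs
			Selector:  labels,
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-querier-2", Labels: labels},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",      // left-most DNS label
			Subdomain: "dns-test-service-2", // must match the headless service name
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	return svc, pod
}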
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":172,"skipped":2515,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:25:55.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 00:25:55.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6610' Mar 12 00:25:55.606: INFO: stderr: "" Mar 12 00:25:55.606: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868 Mar 12 00:25:55.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6610' Mar 12 00:26:02.484: INFO: stderr: "" Mar 12 00:26:02.484: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:02.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6610" for this suite. 
• [SLOW TEST:7.035 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":280,"completed":173,"skipped":2550,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:02.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-3f24e579-780a-430b-808d-afeab6e3bd5f STEP: Creating a pod to test consume configMaps Mar 12 00:26:02.600: INFO: Waiting up to 5m0s for pod "pod-configmaps-76539c28-1d0f-4198-b500-22d6d4390f6b" in namespace "configmap-4318" to be "success or failure" Mar 12 00:26:02.603: INFO: Pod "pod-configmaps-76539c28-1d0f-4198-b500-22d6d4390f6b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.051585ms Mar 12 00:26:04.607: INFO: Pod "pod-configmaps-76539c28-1d0f-4198-b500-22d6d4390f6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006893049s STEP: Saw pod success Mar 12 00:26:04.607: INFO: Pod "pod-configmaps-76539c28-1d0f-4198-b500-22d6d4390f6b" satisfied condition "success or failure" Mar 12 00:26:04.609: INFO: Trying to get logs from node latest-worker pod pod-configmaps-76539c28-1d0f-4198-b500-22d6d4390f6b container configmap-volume-test: STEP: delete the pod Mar 12 00:26:04.636: INFO: Waiting for pod pod-configmaps-76539c28-1d0f-4198-b500-22d6d4390f6b to disappear Mar 12 00:26:04.675: INFO: Pod pod-configmaps-76539c28-1d0f-4198-b500-22d6d4390f6b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:04.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4318" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":174,"skipped":2608,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:04.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 12 00:26:04.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6" in namespace "projected-3143" to be "success or failure" Mar 12 00:26:04.764: INFO: Pod "downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.87767ms Mar 12 00:26:06.772: INFO: Pod "downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023317922s Mar 12 00:26:08.776: INFO: Pod "downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027042271s STEP: Saw pod success Mar 12 00:26:08.776: INFO: Pod "downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6" satisfied condition "success or failure" Mar 12 00:26:08.779: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6 container client-container: STEP: delete the pod Mar 12 00:26:08.796: INFO: Waiting for pod downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6 to disappear Mar 12 00:26:08.818: INFO: Pod downwardapi-volume-82dad5eb-1416-43c5-83d4-26a854c32fa6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:08.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3143" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":175,"skipped":2630,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:08.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-7ebce23f-a28c-47ea-b0e1-89fb82fdc42f STEP: Creating configMap with name cm-test-opt-upd-b80a2ece-b162-4d80-a9ff-688f3d3cddb9 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7ebce23f-a28c-47ea-b0e1-89fb82fdc42f STEP: Updating configmap cm-test-opt-upd-b80a2ece-b162-4d80-a9ff-688f3d3cddb9 STEP: Creating configMap with name cm-test-opt-create-4b27c4f1-e6a3-4ec2-acba-a5ea55c19310 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:15.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3539" for this suite. 
• [SLOW TEST:6.181 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":176,"skipped":2645,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:15.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 12 00:26:15.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-334688f0-53ac-47a8-aebe-1f4943a7f123" in namespace "downward-api-527" to be "success or failure" Mar 12 00:26:15.110: INFO: Pod "downwardapi-volume-334688f0-53ac-47a8-aebe-1f4943a7f123": Phase="Pending", Reason="", readiness=false. Elapsed: 14.684284ms Mar 12 00:26:17.114: INFO: Pod "downwardapi-volume-334688f0-53ac-47a8-aebe-1f4943a7f123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018449838s STEP: Saw pod success Mar 12 00:26:17.114: INFO: Pod "downwardapi-volume-334688f0-53ac-47a8-aebe-1f4943a7f123" satisfied condition "success or failure" Mar 12 00:26:17.117: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-334688f0-53ac-47a8-aebe-1f4943a7f123 container client-container: STEP: delete the pod Mar 12 00:26:17.138: INFO: Waiting for pod downwardapi-volume-334688f0-53ac-47a8-aebe-1f4943a7f123 to disappear Mar 12 00:26:17.143: INFO: Pod downwardapi-volume-334688f0-53ac-47a8-aebe-1f4943a7f123 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:17.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-527" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":177,"skipped":2650,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:17.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:17.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-986" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":280,"completed":178,"skipped":2650,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:17.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name secret-emptykey-test-0e7e0c5d-2fb5-4400-8807-bfb9e77f9adc [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:17.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1162" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":179,"skipped":2705,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:17.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 12 00:26:17.478: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:17.496: INFO: Number of nodes with available pods: 0 Mar 12 00:26:17.496: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:26:18.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:18.501: INFO: Number of nodes with available pods: 0 Mar 12 00:26:18.501: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:26:19.500: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:19.504: INFO: Number of nodes with available pods: 2 Mar 12 00:26:19.504: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 12 00:26:19.521: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:19.524: INFO: Number of nodes with available pods: 1 Mar 12 00:26:19.524: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 00:26:20.533: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:20.550: INFO: Number of nodes with available pods: 1 Mar 12 00:26:20.550: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 00:26:21.529: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:21.533: INFO: Number of nodes with available pods: 1 Mar 12 00:26:21.533: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 00:26:22.534: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:22.537: INFO: Number of nodes with available pods: 1 Mar 12 00:26:22.537: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 00:26:23.528: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:23.530: INFO: Number of nodes with available pods: 1 Mar 12 00:26:23.530: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 00:26:24.528: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:24.532: INFO: Number of nodes with available pods: 1 Mar 12 00:26:24.532: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 00:26:25.528: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:26:25.531: INFO: Number of nodes with available pods: 2 Mar 12 00:26:25.531: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8956, will wait for the garbage collector to delete the pods Mar 12 00:26:25.590: INFO: Deleting DaemonSet.extensions daemon-set took: 5.455492ms Mar 12 00:26:25.891: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265359ms Mar 12 00:26:32.594: INFO: Number of nodes with available pods: 0 Mar 12 00:26:32.594: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 00:26:32.596: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8956/daemonsets","resourceVersion":"942481"},"items":null} Mar 12 00:26:32.598: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8956/pods","resourceVersion":"942481"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:32.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8956" for this suite. • [SLOW TEST:15.283 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":180,"skipped":2719,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:32.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating server pod server in namespace prestop-696 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-696 STEP: Deleting pre-stop pod Mar 12 00:26:41.780: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:41.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-696" for this suite. 
• [SLOW TEST:9.173 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":280,"completed":181,"skipped":2725,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:41.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:26:42.606: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 00:26:44.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569602, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569602, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569602, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569602, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:26:47.659: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 12 00:26:49.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config attach --namespace=webhook-8373 to-be-attached-pod -i -c=container1' Mar 12 00:26:49.855: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:49.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8373" for this suite. STEP: Destroying namespace "webhook-8373-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.119 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":182,"skipped":2726,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:49.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 12 00:26:50.016: INFO: Waiting up to 5m0s for pod "pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd" in namespace "emptydir-9458" to be "success or failure" Mar 12 00:26:50.018: INFO: Pod "pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030331ms Mar 12 00:26:52.021: INFO: Pod "pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005656695s Mar 12 00:26:54.025: INFO: Pod "pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009334461s STEP: Saw pod success Mar 12 00:26:54.025: INFO: Pod "pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd" satisfied condition "success or failure" Mar 12 00:26:54.028: INFO: Trying to get logs from node latest-worker pod pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd container test-container: STEP: delete the pod Mar 12 00:26:54.055: INFO: Waiting for pod pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd to disappear Mar 12 00:26:54.060: INFO: Pod pod-9b5a10bf-9b1f-42c9-98cf-0304a74b4ecd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:54.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9458" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":183,"skipped":2726,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:54.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-f9c813e6-7ed7-47fc-8738-a31279051b7b STEP: Creating the pod STEP: Updating configmap configmap-test-upd-f9c813e6-7ed7-47fc-8738-a31279051b7b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:26:58.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-655" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":184,"skipped":2747,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:26:58.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-zw2kb in namespace proxy-4074 I0312 00:26:58.359525 7 runners.go:189] Created replication controller with name: proxy-service-zw2kb, namespace: proxy-4074, replica count: 1 I0312 00:26:59.409933 7 runners.go:189] proxy-service-zw2kb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 00:27:00.410159 7 runners.go:189] proxy-service-zw2kb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 00:27:01.410410 7 runners.go:189] proxy-service-zw2kb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 00:27:02.410660 7 runners.go:189] proxy-service-zw2kb Pods: 1 out 
of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 00:27:03.410893 7 runners.go:189] proxy-service-zw2kb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 00:27:03.419: INFO: setup took 5.099371594s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 12 00:27:03.425: INFO: (0) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 5.890182ms) Mar 12 00:27:03.425: INFO: (0) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 5.81868ms) Mar 12 00:27:03.425: INFO: (0) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 5.45993ms) Mar 12 00:27:03.426: INFO: (0) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 6.091122ms) Mar 12 00:27:03.426: INFO: (0) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... (200; 6.175958ms) Mar 12 00:27:03.433: INFO: (0) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 12.93732ms) Mar 12 00:27:03.438: INFO: (0) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 19.172373ms) Mar 12 00:27:03.439: INFO: (0) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 19.310831ms) Mar 12 00:27:03.440: INFO: (0) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 20.697017ms) Mar 12 00:27:03.440: INFO: (0) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 20.861757ms) Mar 12 00:27:03.441: INFO: (0) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 21.567301ms) Mar 12 00:27:03.442: INFO: (0) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: test (200; 8.1529ms) Mar 12 00:27:03.455: INFO: (1) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 8.106493ms) Mar 12 00:27:03.455: INFO: (1) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 8.385021ms) Mar 12 00:27:03.456: INFO: (1) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 9.155211ms) Mar 12 00:27:03.456: INFO: (1) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 9.944113ms) Mar 12 00:27:03.457: INFO: (1) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 9.933199ms) Mar 12 00:27:03.457: INFO: (1) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 9.98069ms) Mar 12 00:27:03.457: INFO: (1) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 10.232433ms) Mar 12 00:27:03.457: INFO: (1) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 10.685829ms) Mar 12 00:27:03.458: INFO: (1) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 10.635496ms) Mar 12 00:27:03.458: INFO: (1) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 10.583426ms) Mar 12 00:27:03.462: INFO: (2) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... 
(200; 4.2973ms) Mar 12 00:27:03.462: INFO: (2) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 4.33252ms) Mar 12 00:27:03.462: INFO: (2) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 4.443849ms) Mar 12 00:27:03.462: INFO: (2) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 5.823982ms) Mar 12 00:27:03.464: INFO: (2) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 5.895991ms) Mar 12 00:27:03.464: INFO: (2) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 6.385426ms) Mar 12 00:27:03.464: INFO: (2) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 6.629461ms) Mar 12 00:27:03.464: INFO: (2) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 6.616229ms) Mar 12 00:27:03.465: INFO: (2) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 7.32815ms) Mar 12 00:27:03.465: INFO: (2) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 7.600319ms) Mar 12 00:27:03.465: INFO: (2) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 7.648143ms) Mar 12 00:27:03.465: INFO: (2) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 7.778795ms) Mar 12 00:27:03.465: INFO: (2) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 7.625489ms) Mar 12 00:27:03.465: INFO: (2) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 7.694337ms) Mar 12 00:27:03.467: INFO: (3) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 2.056503ms) Mar 12 00:27:03.469: INFO: (3) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 3.096628ms) Mar 12 00:27:03.469: INFO: (3) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 3.203673ms) Mar 12 00:27:03.469: INFO: (3) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: test (200; 4.036545ms) Mar 12 00:27:03.470: INFO: (3) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 4.185985ms) Mar 12 00:27:03.470: INFO: (3) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... 
(200; 4.328934ms) Mar 12 00:27:03.470: INFO: (3) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 4.446881ms) Mar 12 00:27:03.470: INFO: (3) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 4.459974ms) Mar 12 00:27:03.470: INFO: (3) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 4.61678ms) Mar 12 00:27:03.472: INFO: (3) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 6.314685ms) Mar 12 00:27:03.472: INFO: (3) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 6.230624ms) Mar 12 00:27:03.472: INFO: (3) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 6.283718ms) Mar 12 00:27:03.472: INFO: (3) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 6.226879ms) Mar 12 00:27:03.472: INFO: (3) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 6.303006ms) Mar 12 00:27:03.472: INFO: (3) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 6.30561ms) Mar 12 00:27:03.504: INFO: (4) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 32.506082ms) Mar 12 00:27:03.504: INFO: (4) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 32.472275ms) Mar 12 00:27:03.505: INFO: (4) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 33.367334ms) Mar 12 00:27:03.506: INFO: (4) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 34.414054ms) Mar 12 00:27:03.506: INFO: (4) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 34.406573ms) Mar 12 00:27:03.506: INFO: (4) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 34.449707ms) Mar 12 00:27:03.507: INFO: (4) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 34.58507ms) Mar 12 00:27:03.507: INFO: (4) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 34.652015ms) Mar 12 00:27:03.507: INFO: (4) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 34.586438ms) Mar 12 00:27:03.507: INFO: (4) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 34.685044ms) Mar 12 00:27:03.507: INFO: (4) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 34.633525ms) Mar 12 00:27:03.507: INFO: (4) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 34.621125ms) Mar 12 00:27:03.515: INFO: (5) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 8.147563ms) Mar 12 00:27:03.515: INFO: (5) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... 
(200; 8.583263ms) Mar 12 00:27:03.515: INFO: (5) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 8.599059ms) Mar 12 00:27:03.516: INFO: (5) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 8.76611ms) Mar 12 00:27:03.516: INFO: (5) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 8.891878ms) Mar 12 00:27:03.516: INFO: (5) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 8.774789ms) Mar 12 00:27:03.516: INFO: (5) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 8.814153ms) Mar 12 00:27:03.516: INFO: (5) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 8.914534ms) Mar 12 00:27:03.517: INFO: (5) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 9.94701ms) Mar 12 00:27:03.517: INFO: (5) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 10.005214ms) Mar 12 00:27:03.517: INFO: (5) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 10.038852ms) Mar 12 00:27:03.517: INFO: (5) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 10.093796ms) Mar 12 00:27:03.517: INFO: (5) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 10.18776ms) Mar 12 00:27:03.520: INFO: (6) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 2.835794ms) Mar 12 00:27:03.520: INFO: (6) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 3.140662ms) Mar 12 00:27:03.520: INFO: (6) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 5.4055ms) Mar 12 00:27:03.523: INFO: (6) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 5.464881ms) Mar 12 00:27:03.523: INFO: (6) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 5.859305ms) Mar 12 00:27:03.523: INFO: (6) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 5.740938ms) Mar 12 00:27:03.523: INFO: (6) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 5.730406ms) Mar 12 00:27:03.525: INFO: (7) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 1.686431ms) Mar 12 00:27:03.526: INFO: (7) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 3.005277ms) Mar 12 00:27:03.526: INFO: (7) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 3.267891ms) Mar 12 00:27:03.526: INFO: (7) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 3.242793ms) Mar 12 00:27:03.526: INFO: (7) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... (200; 3.246229ms) Mar 12 00:27:03.526: INFO: (7) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 3.23158ms) Mar 12 00:27:03.526: INFO: (7) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... 
(200; 3.270478ms) Mar 12 00:27:03.526: INFO: (7) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 3.427552ms) Mar 12 00:27:03.528: INFO: (7) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 4.658681ms) Mar 12 00:27:03.528: INFO: (7) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 4.6155ms) Mar 12 00:27:03.528: INFO: (7) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 4.699369ms) Mar 12 00:27:03.528: INFO: (7) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 4.707625ms) Mar 12 00:27:03.528: INFO: (7) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 4.777551ms) Mar 12 00:27:03.528: INFO: (7) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 4.925185ms) Mar 12 00:27:03.528: INFO: (7) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 4.857469ms) Mar 12 00:27:03.529: INFO: (7) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 5.407991ms) Mar 12 00:27:03.534: INFO: (8) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 5.449054ms) Mar 12 00:27:03.534: INFO: (8) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 5.434511ms) Mar 12 00:27:03.534: INFO: (8) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 5.567589ms) Mar 12 00:27:03.536: INFO: (8) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 6.934612ms) Mar 12 00:27:03.536: INFO: (8) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 6.998825ms) Mar 12 00:27:03.536: INFO: (8) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 7.425179ms) Mar 12 00:27:03.536: INFO: (8) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 7.519691ms) Mar 12 00:27:03.536: INFO: (8) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 7.621894ms) Mar 12 00:27:03.538: INFO: (9) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 1.646492ms) Mar 12 00:27:03.538: INFO: (9) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 1.602923ms) Mar 12 00:27:03.539: INFO: (9) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 2.637671ms) Mar 12 00:27:03.539: INFO: (9) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 2.87237ms) Mar 12 00:27:03.539: INFO: (9) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 2.913469ms) Mar 12 00:27:03.542: INFO: (9) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 5.797645ms) Mar 12 00:27:03.542: INFO: (9) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 5.958648ms) Mar 12 00:27:03.543: INFO: (9) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... 
(200; 6.199451ms) Mar 12 00:27:03.543: INFO: (9) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 6.237397ms) Mar 12 00:27:03.543: INFO: (9) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 6.279825ms) Mar 12 00:27:03.544: INFO: (9) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 7.215453ms) Mar 12 00:27:03.544: INFO: (9) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 7.292531ms) Mar 12 00:27:03.544: INFO: (9) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 7.556622ms) Mar 12 00:27:03.544: INFO: (9) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 7.565909ms) Mar 12 00:27:03.547: INFO: (10) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: test<... (200; 4.744137ms) Mar 12 00:27:03.549: INFO: (10) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 4.775048ms) Mar 12 00:27:03.549: INFO: (10) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 4.764805ms) Mar 12 00:27:03.549: INFO: (10) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... (200; 4.75563ms) Mar 12 00:27:03.549: INFO: (10) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 4.778006ms) Mar 12 00:27:03.549: INFO: (10) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 4.834112ms) Mar 12 00:27:03.549: INFO: (10) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 4.796159ms) Mar 12 00:27:03.549: INFO: (10) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 4.853375ms) Mar 12 00:27:03.551: INFO: (11) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 2.261404ms) Mar 12 00:27:03.551: INFO: (11) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 2.340917ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 4.43153ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 4.437385ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 4.471319ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 4.758114ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 4.903895ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 5.150471ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 5.175714ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 5.196417ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... 
(200; 5.187559ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 5.273272ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 5.377587ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 5.404871ms) Mar 12 00:27:03.554: INFO: (11) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 5.416881ms) Mar 12 00:27:03.556: INFO: (12) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 3.657907ms) Mar 12 00:27:03.558: INFO: (12) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 3.650833ms) Mar 12 00:27:03.558: INFO: (12) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 3.710373ms) Mar 12 00:27:03.558: INFO: (12) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 3.671448ms) Mar 12 00:27:03.558: INFO: (12) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 3.688027ms) Mar 12 00:27:03.558: INFO: (12) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 3.727449ms) Mar 12 00:27:03.558: INFO: (12) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 3.811223ms) Mar 12 00:27:03.561: INFO: (13) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 2.511863ms) Mar 12 00:27:03.561: INFO: (13) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 2.616571ms) Mar 12 00:27:03.561: INFO: (13) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 2.609899ms) Mar 12 00:27:03.561: INFO: (13) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 2.732915ms) Mar 12 00:27:03.561: INFO: (13) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 2.784934ms) Mar 12 00:27:03.561: INFO: (13) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... 
(200; 3.069953ms) Mar 12 00:27:03.562: INFO: (13) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 3.104893ms) Mar 12 00:27:03.562: INFO: (13) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 3.300196ms) Mar 12 00:27:03.562: INFO: (13) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 3.519749ms) Mar 12 00:27:03.562: INFO: (13) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 3.535474ms) Mar 12 00:27:03.562: INFO: (13) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 3.549287ms) Mar 12 00:27:03.562: INFO: (13) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 3.688849ms) Mar 12 00:27:03.562: INFO: (13) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 3.800705ms) Mar 12 00:27:03.565: INFO: (14) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 2.506142ms) Mar 12 00:27:03.565: INFO: (14) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 2.545492ms) Mar 12 00:27:03.565: INFO: (14) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 2.876994ms) Mar 12 00:27:03.565: INFO: (14) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 3.009456ms) Mar 12 00:27:03.566: INFO: (14) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 3.167893ms) Mar 12 00:27:03.567: INFO: (14) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 4.800157ms) Mar 12 00:27:03.567: INFO: (14) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 4.773769ms) Mar 12 00:27:03.567: INFO: (14) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 4.823327ms) Mar 12 00:27:03.567: INFO: (14) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... (200; 4.829561ms) Mar 12 00:27:03.568: INFO: (14) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 5.185887ms) Mar 12 00:27:03.568: INFO: (14) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: test<... (200; 8.117113ms) Mar 12 00:27:03.583: INFO: (15) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 8.216405ms) Mar 12 00:27:03.583: INFO: (15) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 8.406437ms) Mar 12 00:27:03.583: INFO: (15) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... (200; 8.461597ms) Mar 12 00:27:03.583: INFO: (15) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 8.400174ms) Mar 12 00:27:03.583: INFO: (15) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 6.350232ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... 
(200; 6.317465ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 6.406299ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 6.36314ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 6.421295ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 6.366993ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 6.383587ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 6.450757ms) Mar 12 00:27:03.590: INFO: (16) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 6.562745ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: test<... (200; 4.414194ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... (200; 4.345349ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 4.412058ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 4.407878ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 4.538093ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 4.569272ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 4.52645ms) Mar 12 00:27:03.594: INFO: (17) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 4.623895ms) Mar 12 00:27:03.595: INFO: (17) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 4.715087ms) Mar 12 00:27:03.595: INFO: (17) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 4.781582ms) Mar 12 00:27:03.595: INFO: (17) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 4.756165ms) Mar 12 00:27:03.597: INFO: (17) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 6.784575ms) Mar 12 00:27:03.599: INFO: (18) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 2.830682ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 3.061345ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 3.648507ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp/proxy/: test (200; 3.639782ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 3.744968ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 3.738699ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... 
(200; 3.648817ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: ... (200; 3.778566ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 3.738653ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 3.741765ms) Mar 12 00:27:03.600: INFO: (18) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 3.691721ms) Mar 12 00:27:03.601: INFO: (18) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 4.148458ms) Mar 12 00:27:03.601: INFO: (18) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 4.258046ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 30.04146ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 30.082637ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:162/proxy/: bar (200; 30.179219ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:460/proxy/: tls baz (200; 30.146127ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:443/proxy/: test (200; 30.250463ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:160/proxy/: foo (200; 30.290999ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/http:proxy-service-zw2kb-md9sp:1080/proxy/: ... (200; 30.331698ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/proxy-service-zw2kb-md9sp:1080/proxy/: test<... (200; 30.282541ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/pods/https:proxy-service-zw2kb-md9sp:462/proxy/: tls qux (200; 30.294157ms) Mar 12 00:27:03.631: INFO: (19) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname1/proxy/: foo (200; 30.398655ms) Mar 12 00:27:03.633: INFO: (19) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname1/proxy/: tls baz (200; 31.88641ms) Mar 12 00:27:03.633: INFO: (19) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/: foo (200; 32.479791ms) Mar 12 00:27:03.633: INFO: (19) /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname2/proxy/: bar (200; 32.520235ms) Mar 12 00:27:03.634: INFO: (19) /api/v1/namespaces/proxy-4074/services/proxy-service-zw2kb:portname2/proxy/: bar (200; 32.77417ms) Mar 12 00:27:03.634: INFO: (19) /api/v1/namespaces/proxy-4074/services/https:proxy-service-zw2kb:tlsportname2/proxy/: tls qux (200; 32.858791ms) STEP: deleting ReplicationController proxy-service-zw2kb in namespace proxy-4074, will wait for the garbage collector to delete the pods Mar 12 00:27:03.689: INFO: Deleting ReplicationController proxy-service-zw2kb took: 3.352853ms Mar 12 00:27:03.989: INFO: Terminating ReplicationController proxy-service-zw2kb pods took: 300.136688ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:27:12.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4074" for this suite. 
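The (0)–(19) batches above are twenty rounds of requests through the apiserver's proxy subresource, covering every scheme/port/target combination for one service and one pod in namespace proxy-4074. A minimal client-go sketch of one such request — the namespace, service name, and port name are taken from the log; the rest is a plain illustration, not the suite's own code:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig this run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to:
	// GET /api/v1/namespaces/proxy-4074/services/http:proxy-service-zw2kb:portname1/proxy/
	body, err := cs.CoreV1().
		Services("proxy-4074").
		ProxyGet("http", "proxy-service-zw2kb", "portname1", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body) // the log shows this path answering "foo" with HTTP 200
}

The pods/...:port/proxy/ variants work the same way via cs.CoreV1().Pods("proxy-4074").ProxyGet(...).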
• [SLOW TEST:14.348 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":280,"completed":185,"skipped":2759,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:27:12.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-69f32cff-ec1e-4ff4-bfdd-10791cea3341 STEP: Creating a pod to test consume configMaps Mar 12 00:27:12.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd0076b8-fd51-48fa-8cc3-c604626faca9" in namespace "configmap-3475" to be "success or failure" Mar 12 00:27:12.692: INFO: Pod "pod-configmaps-bd0076b8-fd51-48fa-8cc3-c604626faca9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.747412ms Mar 12 00:27:14.696: INFO: Pod "pod-configmaps-bd0076b8-fd51-48fa-8cc3-c604626faca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014671772s STEP: Saw pod success Mar 12 00:27:14.696: INFO: Pod "pod-configmaps-bd0076b8-fd51-48fa-8cc3-c604626faca9" satisfied condition "success or failure" Mar 12 00:27:14.699: INFO: Trying to get logs from node latest-worker pod pod-configmaps-bd0076b8-fd51-48fa-8cc3-c604626faca9 container configmap-volume-test: STEP: delete the pod Mar 12 00:27:14.745: INFO: Waiting for pod pod-configmaps-bd0076b8-fd51-48fa-8cc3-c604626faca9 to disappear Mar 12 00:27:14.756: INFO: Pod pod-configmaps-bd0076b8-fd51-48fa-8cc3-c604626faca9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:27:14.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3475" for this suite. 
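What distinguishes this "with mappings" variant from the plain ConfigMap-volume case is the items list in the volume source, which remaps a ConfigMap key to an arbitrary file path under the mount point. A sketch of the pod shape involved; the ConfigMap name comes from the log, while the key, file path, image, and command are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-map-69f32cff-ec1e-4ff4-bfdd-10791cea3341",
						},
						// The mapping: key "data-1" is written to the file
						// "path/to/data-2" under the mount, not to a file "data-1".
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}

With RestartPolicyNever, the pod reaching Phase="Succeeded" after printing the mapped file is what the log records as the "success or failure" condition being satisfied.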
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":186,"skipped":2769,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:27:14.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:27:14.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 12 00:27:15.432: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T00:27:15Z generation:1 name:name1 resourceVersion:942867 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c650d794-babe-4970-b5be-06586aa37024] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 12 00:27:25.437: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T00:27:25Z generation:1 name:name2 resourceVersion:942922 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c9468b55-87d4-442e-9f50-6393cc97b2ab] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 12 00:27:35.443: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T00:27:15Z generation:2 name:name1 resourceVersion:942952 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c650d794-babe-4970-b5be-06586aa37024] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 12 00:27:45.449: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T00:27:25Z generation:2 name:name2 resourceVersion:942982 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c9468b55-87d4-442e-9f50-6393cc97b2ab] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 12 00:27:55.456: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T00:27:15Z generation:2 name:name1 resourceVersion:943012 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c650d794-babe-4970-b5be-06586aa37024] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 12 00:28:05.463: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-03-12T00:27:25Z generation:2 name:name2 resourceVersion:943042 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c9468b55-87d4-442e-9f50-6393cc97b2ab] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:15.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4326" for this suite. • [SLOW TEST:61.217 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":187,"skipped":2797,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:15.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap that has name configmap-test-emptyKey-c4d8f712-140f-4799-b508-e98012bb2c1a [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:16.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6359" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":188,"skipped":2844,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:16.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:23.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6825" for this suite. • [SLOW TEST:7.061 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":189,"skipped":2858,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:23.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 12 00:28:23.186: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 12 00:28:32.215: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:32.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8512" for this suite. 
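The "Got : ADDED/MODIFIED/DELETED" lines in the CustomResourceDefinition Watch test further up are watch events on the cluster-scoped noxus resource, delivered roughly ten seconds apart as the test creates, patches, and deletes the two CRs. A sketch of the same watch using the dynamic client — the group/version/resource is read off the selfLink in the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The test's CRD is cluster-scoped and served at
	// /apis/mygroup.example.com/v1beta1/noxus (per the selfLink in the log).
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Events arrive as ADDED/MODIFIED/DELETED, matching the "Got :" lines
	// above; this loop runs until the watch is closed or the process exits.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}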
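"Ensuring resource quota status is calculated" in the ResourceQuota test above amounts to polling the created quota until the controller has populated status.Hard and status.Used. A sketch under stated assumptions — the quota's hard limits are not shown in the log, so a pods-count limit stands in here, and the namespace name is reused from the log purely as an example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "resourcequota-6825" // example namespace, borrowed from the log

	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")}, // assumed limit
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(context.TODO(), quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Poll until the quota controller fills in the status, i.e. the
	// "status is promptly calculated" condition the test checks.
	err = wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		q, err := cs.CoreV1().ResourceQuotas(ns).Get(context.TODO(), "test-quota", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return len(q.Status.Hard) > 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("quota status calculated")
}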
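The Pods "submitted and removed" test that ends just above pairs a name-scoped watch with a graceful delete, then waits for the DELETED watch event. A sketch of the delete-and-observe half; the pod name is illustrative and the 30-second grace period is an assumption about what "deleting the pod gracefully" uses:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "pods-8512", "pod-submit-remove-example" // illustrative names

	// "setting up watch": scope the watch to the one pod by name.
	w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// "deleting the pod gracefully": delete with a grace period rather
	// than force-killing, so the kubelet observes the termination notice.
	grace := int64(30)
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	}); err != nil {
		panic(err)
	}

	// "verifying pod deletion was observed": wait for the DELETED event.
	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			fmt.Println("observed pod deletion")
			return
		}
	}
}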
• [SLOW TEST:9.097 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":190,"skipped":2863,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:32.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:28:32.287: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-742 I0312 00:28:32.344168 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-742, replica count: 1 I0312 00:28:33.394527 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 00:28:34.394687 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 00:28:34.519: INFO: Created: latency-svc-fknrg Mar 12 00:28:34.527: INFO: Got endpoints: latency-svc-fknrg [32.603394ms] Mar 12 00:28:34.601: INFO: Created: latency-svc-qnsm9 Mar 12 00:28:34.613: INFO: Got endpoints: latency-svc-qnsm9 [85.809636ms] Mar 12 00:28:34.637: INFO: Created: latency-svc-pttw4 Mar 12 00:28:34.646: INFO: Got endpoints: latency-svc-pttw4 [118.72802ms] Mar 12 00:28:34.670: INFO: Created: latency-svc-vs8lg Mar 12 00:28:34.676: INFO: Got endpoints: latency-svc-vs8lg [148.758852ms] Mar 12 00:28:34.699: INFO: Created: latency-svc-t2jcp Mar 12 00:28:34.732: INFO: Got endpoints: latency-svc-t2jcp [204.671864ms] Mar 12 00:28:34.748: INFO: Created: latency-svc-5l6zd Mar 12 00:28:34.753: INFO: Got endpoints: latency-svc-5l6zd [225.937665ms] Mar 12 00:28:34.772: INFO: Created: latency-svc-pzbfs Mar 12 00:28:34.778: INFO: Got endpoints: latency-svc-pzbfs [250.395598ms] Mar 12 00:28:34.798: INFO: Created: latency-svc-b4js5 Mar 12 00:28:34.801: INFO: Got endpoints: latency-svc-b4js5 [273.799791ms] Mar 12 00:28:34.817: INFO: Created: latency-svc-c6kth Mar 12 00:28:34.864: INFO: Got endpoints: latency-svc-c6kth [336.608716ms] Mar 12 00:28:34.865: INFO: Created: latency-svc-k7p8g Mar 12 00:28:34.885: INFO: Created: latency-svc-dwmjj Mar 12 00:28:34.885: INFO: Got endpoints: latency-svc-k7p8g [357.762293ms] Mar 12 00:28:34.904: INFO: Got endpoints: latency-svc-dwmjj [376.187392ms] Mar 12 00:28:34.921: INFO: Created: latency-svc-kz9kb Mar 12 00:28:34.928: INFO: Got endpoints: latency-svc-kz9kb [400.60232ms] 
Mar 12 00:28:34.946: INFO: Created: latency-svc-qxf8s Mar 12 00:28:34.952: INFO: Got endpoints: latency-svc-qxf8s [424.596101ms] Mar 12 00:28:34.989: INFO: Created: latency-svc-69cbz Mar 12 00:28:35.008: INFO: Created: latency-svc-4v7vv Mar 12 00:28:35.009: INFO: Got endpoints: latency-svc-69cbz [481.414084ms] Mar 12 00:28:35.017: INFO: Got endpoints: latency-svc-4v7vv [490.194823ms] Mar 12 00:28:35.039: INFO: Created: latency-svc-nzb2b Mar 12 00:28:35.048: INFO: Got endpoints: latency-svc-nzb2b [520.469718ms] Mar 12 00:28:35.071: INFO: Created: latency-svc-5jxdr Mar 12 00:28:35.077: INFO: Got endpoints: latency-svc-5jxdr [464.392903ms] Mar 12 00:28:35.122: INFO: Created: latency-svc-75zsw Mar 12 00:28:35.143: INFO: Created: latency-svc-s9nkv Mar 12 00:28:35.143: INFO: Got endpoints: latency-svc-75zsw [497.323558ms] Mar 12 00:28:35.149: INFO: Got endpoints: latency-svc-s9nkv [472.832479ms] Mar 12 00:28:35.174: INFO: Created: latency-svc-64zhd Mar 12 00:28:35.179: INFO: Got endpoints: latency-svc-64zhd [446.901784ms] Mar 12 00:28:35.200: INFO: Created: latency-svc-tqfmq Mar 12 00:28:35.213: INFO: Got endpoints: latency-svc-tqfmq [459.282038ms] Mar 12 00:28:35.253: INFO: Created: latency-svc-689tn Mar 12 00:28:35.281: INFO: Created: latency-svc-tr9pc Mar 12 00:28:35.282: INFO: Got endpoints: latency-svc-689tn [503.969897ms] Mar 12 00:28:35.287: INFO: Got endpoints: latency-svc-tr9pc [485.769119ms] Mar 12 00:28:35.311: INFO: Created: latency-svc-jlz74 Mar 12 00:28:35.330: INFO: Got endpoints: latency-svc-jlz74 [465.446099ms] Mar 12 00:28:35.351: INFO: Created: latency-svc-6nvzt Mar 12 00:28:35.379: INFO: Got endpoints: latency-svc-6nvzt [493.547854ms] Mar 12 00:28:35.392: INFO: Created: latency-svc-cr48v Mar 12 00:28:35.401: INFO: Got endpoints: latency-svc-cr48v [497.195953ms] Mar 12 00:28:35.423: INFO: Created: latency-svc-nksjg Mar 12 00:28:35.437: INFO: Got endpoints: latency-svc-nksjg [509.14874ms] Mar 12 00:28:35.468: INFO: Created: latency-svc-m577w Mar 12 00:28:35.473: INFO: Got endpoints: latency-svc-m577w [520.813184ms] Mar 12 00:28:35.529: INFO: Created: latency-svc-6p9zm Mar 12 00:28:35.549: INFO: Created: latency-svc-5z2jj Mar 12 00:28:35.549: INFO: Got endpoints: latency-svc-6p9zm [540.125162ms] Mar 12 00:28:35.557: INFO: Got endpoints: latency-svc-5z2jj [539.457436ms] Mar 12 00:28:35.579: INFO: Created: latency-svc-jjv6f Mar 12 00:28:35.587: INFO: Got endpoints: latency-svc-jjv6f [539.193039ms] Mar 12 00:28:35.606: INFO: Created: latency-svc-fct55 Mar 12 00:28:35.610: INFO: Got endpoints: latency-svc-fct55 [532.864679ms] Mar 12 00:28:35.648: INFO: Created: latency-svc-gb78c Mar 12 00:28:35.672: INFO: Created: latency-svc-hr8pw Mar 12 00:28:35.672: INFO: Got endpoints: latency-svc-gb78c [528.405179ms] Mar 12 00:28:35.676: INFO: Got endpoints: latency-svc-hr8pw [527.187116ms] Mar 12 00:28:35.699: INFO: Created: latency-svc-plzfj Mar 12 00:28:35.706: INFO: Got endpoints: latency-svc-plzfj [527.474726ms] Mar 12 00:28:35.723: INFO: Created: latency-svc-btzlr Mar 12 00:28:35.741: INFO: Got endpoints: latency-svc-btzlr [528.436915ms] Mar 12 00:28:35.774: INFO: Created: latency-svc-jpptj Mar 12 00:28:35.804: INFO: Got endpoints: latency-svc-jpptj [521.956907ms] Mar 12 00:28:35.804: INFO: Created: latency-svc-l9lwj Mar 12 00:28:35.808: INFO: Got endpoints: latency-svc-l9lwj [521.148199ms] Mar 12 00:28:35.834: INFO: Created: latency-svc-b89kb Mar 12 00:28:35.838: INFO: Got endpoints: latency-svc-b89kb [508.701362ms] Mar 12 00:28:35.855: INFO: Created: latency-svc-f2c7d Mar 12 
00:28:35.862: INFO: Got endpoints: latency-svc-f2c7d [483.378148ms] Mar 12 00:28:35.906: INFO: Created: latency-svc-7nz55 Mar 12 00:28:35.921: INFO: Got endpoints: latency-svc-7nz55 [520.430476ms] Mar 12 00:28:35.922: INFO: Created: latency-svc-zb6bn Mar 12 00:28:35.928: INFO: Got endpoints: latency-svc-zb6bn [491.118322ms] Mar 12 00:28:35.960: INFO: Created: latency-svc-tsfpr Mar 12 00:28:35.964: INFO: Got endpoints: latency-svc-tsfpr [491.666353ms] Mar 12 00:28:35.978: INFO: Created: latency-svc-c6fck Mar 12 00:28:35.982: INFO: Got endpoints: latency-svc-c6fck [433.18569ms] Mar 12 00:28:35.999: INFO: Created: latency-svc-s2gsl Mar 12 00:28:36.031: INFO: Got endpoints: latency-svc-s2gsl [474.523308ms] Mar 12 00:28:36.047: INFO: Created: latency-svc-ccq5b Mar 12 00:28:36.054: INFO: Got endpoints: latency-svc-ccq5b [467.150797ms] Mar 12 00:28:36.071: INFO: Created: latency-svc-sn2tx Mar 12 00:28:36.092: INFO: Created: latency-svc-xldqx Mar 12 00:28:36.093: INFO: Got endpoints: latency-svc-sn2tx [482.776954ms] Mar 12 00:28:36.102: INFO: Got endpoints: latency-svc-xldqx [429.912216ms] Mar 12 00:28:36.122: INFO: Created: latency-svc-ndfk2 Mar 12 00:28:36.151: INFO: Got endpoints: latency-svc-ndfk2 [475.053065ms] Mar 12 00:28:36.191: INFO: Created: latency-svc-f8rww Mar 12 00:28:36.198: INFO: Got endpoints: latency-svc-f8rww [491.956528ms] Mar 12 00:28:36.221: INFO: Created: latency-svc-d257h Mar 12 00:28:36.228: INFO: Got endpoints: latency-svc-d257h [486.946124ms] Mar 12 00:28:36.248: INFO: Created: latency-svc-pzdh7 Mar 12 00:28:36.271: INFO: Got endpoints: latency-svc-pzdh7 [467.870133ms] Mar 12 00:28:36.284: INFO: Created: latency-svc-mmn24 Mar 12 00:28:36.288: INFO: Got endpoints: latency-svc-mmn24 [479.429062ms] Mar 12 00:28:36.308: INFO: Created: latency-svc-jmwwb Mar 12 00:28:36.330: INFO: Got endpoints: latency-svc-jmwwb [491.915011ms] Mar 12 00:28:36.359: INFO: Created: latency-svc-rcc8w Mar 12 00:28:36.366: INFO: Got endpoints: latency-svc-rcc8w [503.457492ms] Mar 12 00:28:36.415: INFO: Created: latency-svc-bxdm2 Mar 12 00:28:36.434: INFO: Got endpoints: latency-svc-bxdm2 [512.529662ms] Mar 12 00:28:36.434: INFO: Created: latency-svc-8zrvm Mar 12 00:28:36.444: INFO: Got endpoints: latency-svc-8zrvm [515.315358ms] Mar 12 00:28:36.470: INFO: Created: latency-svc-k9nn5 Mar 12 00:28:36.474: INFO: Got endpoints: latency-svc-k9nn5 [509.290444ms] Mar 12 00:28:36.491: INFO: Created: latency-svc-t9gkf Mar 12 00:28:36.510: INFO: Got endpoints: latency-svc-t9gkf [527.297028ms] Mar 12 00:28:36.551: INFO: Created: latency-svc-rljdc Mar 12 00:28:36.563: INFO: Got endpoints: latency-svc-rljdc [531.876117ms] Mar 12 00:28:36.602: INFO: Created: latency-svc-kpctr Mar 12 00:28:36.612: INFO: Got endpoints: latency-svc-kpctr [557.534461ms] Mar 12 00:28:36.672: INFO: Created: latency-svc-nl2bf Mar 12 00:28:36.689: INFO: Created: latency-svc-69rvk Mar 12 00:28:36.690: INFO: Got endpoints: latency-svc-nl2bf [596.716525ms] Mar 12 00:28:36.695: INFO: Got endpoints: latency-svc-69rvk [593.165081ms] Mar 12 00:28:36.713: INFO: Created: latency-svc-tdkw6 Mar 12 00:28:36.719: INFO: Got endpoints: latency-svc-tdkw6 [567.881326ms] Mar 12 00:28:36.740: INFO: Created: latency-svc-n6rkn Mar 12 00:28:36.764: INFO: Got endpoints: latency-svc-n6rkn [565.804784ms] Mar 12 00:28:36.804: INFO: Created: latency-svc-nfrls Mar 12 00:28:36.809: INFO: Got endpoints: latency-svc-nfrls [581.028552ms] Mar 12 00:28:36.827: INFO: Created: latency-svc-p62gl Mar 12 00:28:36.833: INFO: Got endpoints: latency-svc-p62gl [561.834595ms] Mar 
12 00:28:36.851: INFO: Created: latency-svc-ph2cf Mar 12 00:28:36.857: INFO: Got endpoints: latency-svc-ph2cf [569.159039ms] Mar 12 00:28:36.875: INFO: Created: latency-svc-2hgrt Mar 12 00:28:36.881: INFO: Got endpoints: latency-svc-2hgrt [550.69038ms] Mar 12 00:28:36.896: INFO: Created: latency-svc-jsp59 Mar 12 00:28:36.899: INFO: Got endpoints: latency-svc-jsp59 [533.240268ms] Mar 12 00:28:36.936: INFO: Created: latency-svc-v9fwt Mar 12 00:28:36.950: INFO: Got endpoints: latency-svc-v9fwt [516.199591ms] Mar 12 00:28:36.966: INFO: Created: latency-svc-bfk7x Mar 12 00:28:36.971: INFO: Got endpoints: latency-svc-bfk7x [527.68996ms] Mar 12 00:28:36.990: INFO: Created: latency-svc-rlgj9 Mar 12 00:28:36.995: INFO: Got endpoints: latency-svc-rlgj9 [521.186332ms] Mar 12 00:28:37.014: INFO: Created: latency-svc-498md Mar 12 00:28:37.034: INFO: Got endpoints: latency-svc-498md [524.844059ms] Mar 12 00:28:37.073: INFO: Created: latency-svc-92kbp Mar 12 00:28:37.078: INFO: Got endpoints: latency-svc-92kbp [515.042167ms] Mar 12 00:28:37.098: INFO: Created: latency-svc-mblpn Mar 12 00:28:37.102: INFO: Got endpoints: latency-svc-mblpn [490.802491ms] Mar 12 00:28:37.115: INFO: Created: latency-svc-92sk6 Mar 12 00:28:37.121: INFO: Got endpoints: latency-svc-92sk6 [430.696596ms] Mar 12 00:28:37.134: INFO: Created: latency-svc-sf8b8 Mar 12 00:28:37.139: INFO: Got endpoints: latency-svc-sf8b8 [443.730491ms] Mar 12 00:28:37.154: INFO: Created: latency-svc-bxd59 Mar 12 00:28:37.163: INFO: Got endpoints: latency-svc-bxd59 [443.448185ms] Mar 12 00:28:37.199: INFO: Created: latency-svc-khpkr Mar 12 00:28:37.206: INFO: Got endpoints: latency-svc-khpkr [441.723291ms] Mar 12 00:28:37.226: INFO: Created: latency-svc-lfgs4 Mar 12 00:28:37.267: INFO: Created: latency-svc-w52d7 Mar 12 00:28:37.267: INFO: Got endpoints: latency-svc-lfgs4 [457.814095ms] Mar 12 00:28:37.271: INFO: Got endpoints: latency-svc-w52d7 [437.734514ms] Mar 12 00:28:37.331: INFO: Created: latency-svc-njv7v Mar 12 00:28:37.350: INFO: Created: latency-svc-6w46j Mar 12 00:28:37.350: INFO: Got endpoints: latency-svc-njv7v [492.832886ms] Mar 12 00:28:37.374: INFO: Got endpoints: latency-svc-6w46j [492.813384ms] Mar 12 00:28:37.428: INFO: Created: latency-svc-nwt67 Mar 12 00:28:37.463: INFO: Got endpoints: latency-svc-nwt67 [563.855229ms] Mar 12 00:28:37.491: INFO: Created: latency-svc-7gmxt Mar 12 00:28:37.509: INFO: Got endpoints: latency-svc-7gmxt [558.980681ms] Mar 12 00:28:37.510: INFO: Created: latency-svc-x44nk Mar 12 00:28:37.530: INFO: Got endpoints: latency-svc-x44nk [558.344108ms] Mar 12 00:28:37.548: INFO: Created: latency-svc-9wmnj Mar 12 00:28:37.552: INFO: Got endpoints: latency-svc-9wmnj [556.6713ms] Mar 12 00:28:37.584: INFO: Created: latency-svc-j2kqc Mar 12 00:28:37.599: INFO: Got endpoints: latency-svc-j2kqc [564.842579ms] Mar 12 00:28:37.618: INFO: Created: latency-svc-zkx8f Mar 12 00:28:37.624: INFO: Got endpoints: latency-svc-zkx8f [545.736901ms] Mar 12 00:28:37.641: INFO: Created: latency-svc-5v8xg Mar 12 00:28:37.648: INFO: Got endpoints: latency-svc-5v8xg [545.564234ms] Mar 12 00:28:37.668: INFO: Created: latency-svc-whpjj Mar 12 00:28:37.672: INFO: Got endpoints: latency-svc-whpjj [551.27056ms] Mar 12 00:28:37.704: INFO: Created: latency-svc-npr5w Mar 12 00:28:37.720: INFO: Got endpoints: latency-svc-npr5w [581.219917ms] Mar 12 00:28:37.737: INFO: Created: latency-svc-zzkjp Mar 12 00:28:37.761: INFO: Created: latency-svc-6brv9 Mar 12 00:28:37.761: INFO: Got endpoints: latency-svc-zzkjp [598.761014ms] Mar 12 00:28:37.834: INFO: 
Created: latency-svc-ldlgd Mar 12 00:28:37.834: INFO: Got endpoints: latency-svc-6brv9 [627.972293ms] Mar 12 00:28:37.869: INFO: Got endpoints: latency-svc-ldlgd [602.264ms] Mar 12 00:28:37.870: INFO: Created: latency-svc-fjf4r Mar 12 00:28:37.894: INFO: Created: latency-svc-bmsd9 Mar 12 00:28:37.905: INFO: Created: latency-svc-w979l Mar 12 00:28:37.907: INFO: Got endpoints: latency-svc-fjf4r [635.348942ms] Mar 12 00:28:37.972: INFO: Got endpoints: latency-svc-bmsd9 [621.981241ms] Mar 12 00:28:37.972: INFO: Created: latency-svc-v245n Mar 12 00:28:38.007: INFO: Created: latency-svc-ksxr9 Mar 12 00:28:38.007: INFO: Got endpoints: latency-svc-w979l [633.591842ms] Mar 12 00:28:38.045: INFO: Created: latency-svc-w8hk5 Mar 12 00:28:38.056: INFO: Got endpoints: latency-svc-v245n [592.655301ms] Mar 12 00:28:38.092: INFO: Created: latency-svc-vcmnk Mar 12 00:28:38.113: INFO: Created: latency-svc-4rkd6 Mar 12 00:28:38.113: INFO: Got endpoints: latency-svc-ksxr9 [603.730146ms] Mar 12 00:28:38.131: INFO: Created: latency-svc-h4svr Mar 12 00:28:38.156: INFO: Created: latency-svc-vfq4f Mar 12 00:28:38.164: INFO: Got endpoints: latency-svc-w8hk5 [633.846572ms] Mar 12 00:28:38.182: INFO: Created: latency-svc-n6bfx Mar 12 00:28:38.223: INFO: Got endpoints: latency-svc-vcmnk [671.372133ms] Mar 12 00:28:38.223: INFO: Created: latency-svc-cgsqs Mar 12 00:28:38.250: INFO: Created: latency-svc-9nmmg Mar 12 00:28:38.260: INFO: Got endpoints: latency-svc-4rkd6 [661.051332ms] Mar 12 00:28:38.281: INFO: Created: latency-svc-dpg5z Mar 12 00:28:38.319: INFO: Created: latency-svc-vz667 Mar 12 00:28:38.319: INFO: Got endpoints: latency-svc-h4svr [695.153343ms] Mar 12 00:28:38.368: INFO: Created: latency-svc-xrw7g Mar 12 00:28:38.368: INFO: Got endpoints: latency-svc-vfq4f [719.933629ms] Mar 12 00:28:38.392: INFO: Created: latency-svc-ghnm7 Mar 12 00:28:38.404: INFO: Got endpoints: latency-svc-n6bfx [731.599776ms] Mar 12 00:28:38.437: INFO: Created: latency-svc-pf8pq Mar 12 00:28:38.475: INFO: Got endpoints: latency-svc-cgsqs [754.85358ms] Mar 12 00:28:38.503: INFO: Created: latency-svc-xbt94 Mar 12 00:28:38.511: INFO: Got endpoints: latency-svc-9nmmg [749.174732ms] Mar 12 00:28:38.537: INFO: Created: latency-svc-wpzkk Mar 12 00:28:38.559: INFO: Got endpoints: latency-svc-dpg5z [725.222422ms] Mar 12 00:28:38.560: INFO: Created: latency-svc-sl58r Mar 12 00:28:38.635: INFO: Created: latency-svc-p2fl6 Mar 12 00:28:38.635: INFO: Got endpoints: latency-svc-vz667 [765.267441ms] Mar 12 00:28:38.659: INFO: Got endpoints: latency-svc-xrw7g [752.374648ms] Mar 12 00:28:38.660: INFO: Created: latency-svc-5q8t2 Mar 12 00:28:38.677: INFO: Created: latency-svc-8hbhx Mar 12 00:28:38.733: INFO: Created: latency-svc-tjszm Mar 12 00:28:38.733: INFO: Got endpoints: latency-svc-ghnm7 [761.126572ms] Mar 12 00:28:38.754: INFO: Created: latency-svc-8w4ct Mar 12 00:28:38.757: INFO: Got endpoints: latency-svc-pf8pq [749.216339ms] Mar 12 00:28:38.785: INFO: Created: latency-svc-zw2wt Mar 12 00:28:38.809: INFO: Created: latency-svc-g2pq2 Mar 12 00:28:38.809: INFO: Got endpoints: latency-svc-xbt94 [752.823328ms] Mar 12 00:28:38.852: INFO: Created: latency-svc-pvzd9 Mar 12 00:28:38.853: INFO: Got endpoints: latency-svc-wpzkk [739.991592ms] Mar 12 00:28:38.881: INFO: Created: latency-svc-txtxn Mar 12 00:28:38.914: INFO: Got endpoints: latency-svc-sl58r [750.030087ms] Mar 12 00:28:38.914: INFO: Created: latency-svc-sdw4j Mar 12 00:28:38.944: INFO: Created: latency-svc-mg6pw Mar 12 00:28:39.002: INFO: Got endpoints: latency-svc-p2fl6 [778.754806ms] Mar 
12 00:28:39.002: INFO: Created: latency-svc-dzztw Mar 12 00:28:39.019: INFO: Got endpoints: latency-svc-5q8t2 [758.779564ms] Mar 12 00:28:39.037: INFO: Created: latency-svc-26xz2 Mar 12 00:28:39.055: INFO: Created: latency-svc-zmzrp Mar 12 00:28:39.055: INFO: Got endpoints: latency-svc-8hbhx [735.981608ms] Mar 12 00:28:39.073: INFO: Created: latency-svc-2tvkj Mar 12 00:28:39.091: INFO: Created: latency-svc-87d2p Mar 12 00:28:39.127: INFO: Got endpoints: latency-svc-tjszm [759.355261ms] Mar 12 00:28:39.127: INFO: Created: latency-svc-fq9wb Mar 12 00:28:39.166: INFO: Got endpoints: latency-svc-8w4ct [761.966973ms] Mar 12 00:28:39.166: INFO: Created: latency-svc-9qpn5 Mar 12 00:28:39.187: INFO: Created: latency-svc-xhk8s Mar 12 00:28:39.211: INFO: Created: latency-svc-w6nts Mar 12 00:28:39.211: INFO: Got endpoints: latency-svc-zw2wt [735.833948ms] Mar 12 00:28:39.260: INFO: Created: latency-svc-hj7g7 Mar 12 00:28:39.262: INFO: Got endpoints: latency-svc-g2pq2 [751.503661ms] Mar 12 00:28:39.286: INFO: Created: latency-svc-lqdkc Mar 12 00:28:39.303: INFO: Got endpoints: latency-svc-pvzd9 [743.288619ms] Mar 12 00:28:39.334: INFO: Created: latency-svc-8p9ls Mar 12 00:28:39.367: INFO: Got endpoints: latency-svc-txtxn [732.04094ms] Mar 12 00:28:39.391: INFO: Created: latency-svc-pzj5m Mar 12 00:28:39.403: INFO: Got endpoints: latency-svc-sdw4j [743.88405ms] Mar 12 00:28:39.421: INFO: Created: latency-svc-smqcg Mar 12 00:28:39.453: INFO: Got endpoints: latency-svc-mg6pw [719.85307ms] Mar 12 00:28:39.492: INFO: Created: latency-svc-7f2tq Mar 12 00:28:39.508: INFO: Got endpoints: latency-svc-dzztw [750.852949ms] Mar 12 00:28:39.532: INFO: Created: latency-svc-r4rvj Mar 12 00:28:39.553: INFO: Got endpoints: latency-svc-26xz2 [744.244524ms] Mar 12 00:28:39.571: INFO: Created: latency-svc-ffpl6 Mar 12 00:28:39.612: INFO: Got endpoints: latency-svc-zmzrp [759.215254ms] Mar 12 00:28:39.632: INFO: Created: latency-svc-fczxw Mar 12 00:28:39.653: INFO: Got endpoints: latency-svc-2tvkj [739.125894ms] Mar 12 00:28:39.675: INFO: Created: latency-svc-q875w Mar 12 00:28:39.703: INFO: Got endpoints: latency-svc-87d2p [700.986401ms] Mar 12 00:28:39.745: INFO: Created: latency-svc-9j9lk Mar 12 00:28:39.772: INFO: Got endpoints: latency-svc-fq9wb [752.498231ms] Mar 12 00:28:39.800: INFO: Created: latency-svc-fw5ls Mar 12 00:28:39.803: INFO: Got endpoints: latency-svc-9qpn5 [747.047847ms] Mar 12 00:28:39.823: INFO: Created: latency-svc-98clk Mar 12 00:28:39.876: INFO: Got endpoints: latency-svc-xhk8s [748.335959ms] Mar 12 00:28:39.897: INFO: Created: latency-svc-nnsbs Mar 12 00:28:39.902: INFO: Got endpoints: latency-svc-w6nts [736.922658ms] Mar 12 00:28:39.927: INFO: Created: latency-svc-tdf2m Mar 12 00:28:39.953: INFO: Got endpoints: latency-svc-hj7g7 [742.173276ms] Mar 12 00:28:39.973: INFO: Created: latency-svc-k86qq Mar 12 00:28:40.003: INFO: Got endpoints: latency-svc-lqdkc [740.769883ms] Mar 12 00:28:40.027: INFO: Created: latency-svc-6hkc7 Mar 12 00:28:40.053: INFO: Got endpoints: latency-svc-8p9ls [749.985371ms] Mar 12 00:28:40.110: INFO: Got endpoints: latency-svc-pzj5m [742.928578ms] Mar 12 00:28:40.110: INFO: Created: latency-svc-vrr9f Mar 12 00:28:40.131: INFO: Created: latency-svc-mxxbw Mar 12 00:28:40.153: INFO: Got endpoints: latency-svc-smqcg [750.11975ms] Mar 12 00:28:40.179: INFO: Created: latency-svc-p9mdk Mar 12 00:28:40.203: INFO: Got endpoints: latency-svc-7f2tq [750.496405ms] Mar 12 00:28:40.273: INFO: Got endpoints: latency-svc-r4rvj [765.336479ms] Mar 12 00:28:40.274: INFO: Created: 
latency-svc-xbsdp Mar 12 00:28:40.300: INFO: Created: latency-svc-zsvmr Mar 12 00:28:40.309: INFO: Got endpoints: latency-svc-ffpl6 [755.937097ms] Mar 12 00:28:40.355: INFO: Created: latency-svc-rgzx4 Mar 12 00:28:40.355: INFO: Got endpoints: latency-svc-fczxw [742.3904ms] Mar 12 00:28:40.405: INFO: Created: latency-svc-wzl8c Mar 12 00:28:40.405: INFO: Got endpoints: latency-svc-q875w [751.946017ms] Mar 12 00:28:40.453: INFO: Created: latency-svc-cq8gr Mar 12 00:28:40.475: INFO: Got endpoints: latency-svc-9j9lk [771.880737ms] Mar 12 00:28:40.522: INFO: Got endpoints: latency-svc-fw5ls [749.964056ms] Mar 12 00:28:40.522: INFO: Created: latency-svc-vmzft Mar 12 00:28:40.558: INFO: Created: latency-svc-kvfcb Mar 12 00:28:40.558: INFO: Got endpoints: latency-svc-98clk [755.588624ms] Mar 12 00:28:40.594: INFO: Created: latency-svc-42z2k Mar 12 00:28:40.603: INFO: Got endpoints: latency-svc-nnsbs [726.776977ms] Mar 12 00:28:40.627: INFO: Created: latency-svc-75s7b Mar 12 00:28:40.653: INFO: Got endpoints: latency-svc-tdf2m [750.266952ms] Mar 12 00:28:40.675: INFO: Created: latency-svc-ldbgs Mar 12 00:28:40.708: INFO: Got endpoints: latency-svc-k86qq [754.875705ms] Mar 12 00:28:40.733: INFO: Created: latency-svc-q7qsn Mar 12 00:28:40.753: INFO: Got endpoints: latency-svc-6hkc7 [749.726332ms] Mar 12 00:28:40.774: INFO: Created: latency-svc-48mmd Mar 12 00:28:40.803: INFO: Got endpoints: latency-svc-vrr9f [750.2266ms] Mar 12 00:28:40.843: INFO: Created: latency-svc-z72gk Mar 12 00:28:40.861: INFO: Got endpoints: latency-svc-mxxbw [751.19291ms] Mar 12 00:28:40.888: INFO: Created: latency-svc-f5bqj Mar 12 00:28:40.903: INFO: Got endpoints: latency-svc-p9mdk [749.862816ms] Mar 12 00:28:40.930: INFO: Created: latency-svc-ldl5d Mar 12 00:28:40.966: INFO: Got endpoints: latency-svc-xbsdp [762.12738ms] Mar 12 00:28:41.017: INFO: Got endpoints: latency-svc-zsvmr [744.207688ms] Mar 12 00:28:41.029: INFO: Created: latency-svc-s7jq9 Mar 12 00:28:41.047: INFO: Created: latency-svc-5gt9l Mar 12 00:28:41.053: INFO: Got endpoints: latency-svc-rgzx4 [744.134743ms] Mar 12 00:28:41.085: INFO: Created: latency-svc-sl7fm Mar 12 00:28:41.104: INFO: Got endpoints: latency-svc-wzl8c [749.35875ms] Mar 12 00:28:41.140: INFO: Created: latency-svc-x5xnq Mar 12 00:28:41.153: INFO: Got endpoints: latency-svc-cq8gr [748.402902ms] Mar 12 00:28:41.206: INFO: Created: latency-svc-l6pkl Mar 12 00:28:41.208: INFO: Got endpoints: latency-svc-vmzft [732.972504ms] Mar 12 00:28:41.227: INFO: Created: latency-svc-8z2ls Mar 12 00:28:41.253: INFO: Got endpoints: latency-svc-kvfcb [731.219867ms] Mar 12 00:28:41.277: INFO: Created: latency-svc-rll8f Mar 12 00:28:41.303: INFO: Got endpoints: latency-svc-42z2k [744.740832ms] Mar 12 00:28:41.371: INFO: Created: latency-svc-5rp7x Mar 12 00:28:41.371: INFO: Got endpoints: latency-svc-75s7b [768.487353ms] Mar 12 00:28:41.395: INFO: Created: latency-svc-pzz9x Mar 12 00:28:41.403: INFO: Got endpoints: latency-svc-ldbgs [749.811305ms] Mar 12 00:28:41.431: INFO: Created: latency-svc-2gpdc Mar 12 00:28:41.476: INFO: Got endpoints: latency-svc-q7qsn [767.803543ms] Mar 12 00:28:41.500: INFO: Created: latency-svc-rtzvt Mar 12 00:28:41.503: INFO: Got endpoints: latency-svc-48mmd [750.007109ms] Mar 12 00:28:41.524: INFO: Created: latency-svc-6bn97 Mar 12 00:28:41.582: INFO: Got endpoints: latency-svc-z72gk [779.57266ms] Mar 12 00:28:41.611: INFO: Got endpoints: latency-svc-f5bqj [750.275643ms] Mar 12 00:28:41.611: INFO: Created: latency-svc-8mkz8 Mar 12 00:28:41.629: INFO: Created: latency-svc-4b79t Mar 12 
00:28:41.653: INFO: Got endpoints: latency-svc-ldl5d [749.929505ms] Mar 12 00:28:41.674: INFO: Created: latency-svc-s9lwb Mar 12 00:28:41.703: INFO: Got endpoints: latency-svc-s7jq9 [737.228354ms] Mar 12 00:28:41.728: INFO: Created: latency-svc-7wtgz Mar 12 00:28:41.753: INFO: Got endpoints: latency-svc-5gt9l [735.781805ms] Mar 12 00:28:41.785: INFO: Created: latency-svc-j458q Mar 12 00:28:41.822: INFO: Got endpoints: latency-svc-sl7fm [769.352574ms] Mar 12 00:28:41.845: INFO: Created: latency-svc-h7x7q Mar 12 00:28:41.857: INFO: Got endpoints: latency-svc-x5xnq [752.709313ms] Mar 12 00:28:41.883: INFO: Created: latency-svc-n5j95 Mar 12 00:28:41.903: INFO: Got endpoints: latency-svc-l6pkl [749.695608ms] Mar 12 00:28:41.936: INFO: Created: latency-svc-5l7sv Mar 12 00:28:41.953: INFO: Got endpoints: latency-svc-8z2ls [745.11654ms] Mar 12 00:28:41.974: INFO: Created: latency-svc-nwvg9 Mar 12 00:28:42.003: INFO: Got endpoints: latency-svc-rll8f [749.970917ms] Mar 12 00:28:42.055: INFO: Created: latency-svc-8kzr6 Mar 12 00:28:42.055: INFO: Got endpoints: latency-svc-5rp7x [752.425653ms] Mar 12 00:28:42.085: INFO: Created: latency-svc-8lnkx Mar 12 00:28:42.103: INFO: Got endpoints: latency-svc-pzz9x [731.940282ms] Mar 12 00:28:42.123: INFO: Created: latency-svc-wd76k Mar 12 00:28:42.153: INFO: Got endpoints: latency-svc-2gpdc [750.227606ms] Mar 12 00:28:42.196: INFO: Created: latency-svc-vhv2p Mar 12 00:28:42.203: INFO: Got endpoints: latency-svc-rtzvt [726.791315ms] Mar 12 00:28:42.229: INFO: Created: latency-svc-jjlmq Mar 12 00:28:42.253: INFO: Got endpoints: latency-svc-6bn97 [750.091649ms] Mar 12 00:28:42.316: INFO: Created: latency-svc-wq767 Mar 12 00:28:42.316: INFO: Got endpoints: latency-svc-8mkz8 [733.282757ms] Mar 12 00:28:42.340: INFO: Created: latency-svc-r8s7j Mar 12 00:28:42.361: INFO: Got endpoints: latency-svc-4b79t [749.479107ms] Mar 12 00:28:42.391: INFO: Created: latency-svc-6h8qx Mar 12 00:28:42.427: INFO: Got endpoints: latency-svc-s9lwb [773.867468ms] Mar 12 00:28:42.453: INFO: Got endpoints: latency-svc-7wtgz [749.970693ms] Mar 12 00:28:42.504: INFO: Got endpoints: latency-svc-j458q [750.583802ms] Mar 12 00:28:42.571: INFO: Got endpoints: latency-svc-h7x7q [748.174009ms] Mar 12 00:28:42.603: INFO: Got endpoints: latency-svc-n5j95 [745.993715ms] Mar 12 00:28:42.653: INFO: Got endpoints: latency-svc-5l7sv [750.130702ms] Mar 12 00:28:42.703: INFO: Got endpoints: latency-svc-nwvg9 [749.752559ms] Mar 12 00:28:42.753: INFO: Got endpoints: latency-svc-8kzr6 [749.936977ms] Mar 12 00:28:42.803: INFO: Got endpoints: latency-svc-8lnkx [747.47235ms] Mar 12 00:28:42.853: INFO: Got endpoints: latency-svc-wd76k [749.564237ms] Mar 12 00:28:42.912: INFO: Got endpoints: latency-svc-vhv2p [758.805303ms] Mar 12 00:28:42.953: INFO: Got endpoints: latency-svc-jjlmq [750.348637ms] Mar 12 00:28:43.003: INFO: Got endpoints: latency-svc-wq767 [749.721928ms] Mar 12 00:28:43.053: INFO: Got endpoints: latency-svc-r8s7j [736.915306ms] Mar 12 00:28:43.103: INFO: Got endpoints: latency-svc-6h8qx [742.021111ms] Mar 12 00:28:43.103: INFO: Latencies: [85.809636ms 118.72802ms 148.758852ms 204.671864ms 225.937665ms 250.395598ms 273.799791ms 336.608716ms 357.762293ms 376.187392ms 400.60232ms 424.596101ms 429.912216ms 430.696596ms 433.18569ms 437.734514ms 441.723291ms 443.448185ms 443.730491ms 446.901784ms 457.814095ms 459.282038ms 464.392903ms 465.446099ms 467.150797ms 467.870133ms 472.832479ms 474.523308ms 475.053065ms 479.429062ms 481.414084ms 482.776954ms 483.378148ms 485.769119ms 486.946124ms 490.194823ms 
490.802491ms 491.118322ms 491.666353ms 491.915011ms 491.956528ms 492.813384ms 492.832886ms 493.547854ms 497.195953ms 497.323558ms 503.457492ms 503.969897ms 508.701362ms 509.14874ms 509.290444ms 512.529662ms 515.042167ms 515.315358ms 516.199591ms 520.430476ms 520.469718ms 520.813184ms 521.148199ms 521.186332ms 521.956907ms 524.844059ms 527.187116ms 527.297028ms 527.474726ms 527.68996ms 528.405179ms 528.436915ms 531.876117ms 532.864679ms 533.240268ms 539.193039ms 539.457436ms 540.125162ms 545.564234ms 545.736901ms 550.69038ms 551.27056ms 556.6713ms 557.534461ms 558.344108ms 558.980681ms 561.834595ms 563.855229ms 564.842579ms 565.804784ms 567.881326ms 569.159039ms 581.028552ms 581.219917ms 592.655301ms 593.165081ms 596.716525ms 598.761014ms 602.264ms 603.730146ms 621.981241ms 627.972293ms 633.591842ms 633.846572ms 635.348942ms 661.051332ms 671.372133ms 695.153343ms 700.986401ms 719.85307ms 719.933629ms 725.222422ms 726.776977ms 726.791315ms 731.219867ms 731.599776ms 731.940282ms 732.04094ms 732.972504ms 733.282757ms 735.781805ms 735.833948ms 735.981608ms 736.915306ms 736.922658ms 737.228354ms 739.125894ms 739.991592ms 740.769883ms 742.021111ms 742.173276ms 742.3904ms 742.928578ms 743.288619ms 743.88405ms 744.134743ms 744.207688ms 744.244524ms 744.740832ms 745.11654ms 745.993715ms 747.047847ms 747.47235ms 748.174009ms 748.335959ms 748.402902ms 749.174732ms 749.216339ms 749.35875ms 749.479107ms 749.564237ms 749.695608ms 749.721928ms 749.726332ms 749.752559ms 749.811305ms 749.862816ms 749.929505ms 749.936977ms 749.964056ms 749.970693ms 749.970917ms 749.985371ms 750.007109ms 750.030087ms 750.091649ms 750.11975ms 750.130702ms 750.2266ms 750.227606ms 750.266952ms 750.275643ms 750.348637ms 750.496405ms 750.583802ms 750.852949ms 751.19291ms 751.503661ms 751.946017ms 752.374648ms 752.425653ms 752.498231ms 752.709313ms 752.823328ms 754.85358ms 754.875705ms 755.588624ms 755.937097ms 758.779564ms 758.805303ms 759.215254ms 759.355261ms 761.126572ms 761.966973ms 762.12738ms 765.267441ms 765.336479ms 767.803543ms 768.487353ms 769.352574ms 771.880737ms 773.867468ms 778.754806ms 779.57266ms] Mar 12 00:28:43.103: INFO: 50 %ile: 635.348942ms Mar 12 00:28:43.103: INFO: 90 %ile: 754.85358ms Mar 12 00:28:43.103: INFO: 99 %ile: 778.754806ms Mar 12 00:28:43.103: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:43.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-742" for this suite. 
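Each "Created:" / "Got endpoints:" pair above is one sample: the time from creating a latency-svc-* service to the endpoints controller publishing a ready address for it, and the run closes with percentiles over all 200 samples. A sketch of that per-service measurement — the function name, error text, and structure are mine, not the suite's:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointLatency creates svc and times how long it takes for a ready
// address to appear in the matching Endpoints object — the bracketed
// durations the log prints after each "Got endpoints:" line.
func endpointLatency(cs kubernetes.Interface, ns string, svc *corev1.Service) (time.Duration, error) {
	start := time.Now()
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		return 0, err
	}
	// Endpoints objects share the service's name, so a field selector on
	// metadata.name scopes the watch to just this service.
	w, err := cs.CoreV1().Endpoints(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=" + svc.Name,
	})
	if err != nil {
		return 0, err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		ep, ok := ev.Object.(*corev1.Endpoints)
		if !ok {
			continue
		}
		for _, ss := range ep.Subsets {
			if len(ss.Addresses) > 0 {
				return time.Since(start), nil // "Got endpoints: ... [elapsed]"
			}
		}
	}
	return 0, fmt.Errorf("watch closed before %s had endpoints", svc.Name)
}

The 50/90/99 %ile lines at the end are then just order statistics over the 200 durations this returns; the test passes as long as they stay under its thresholds.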
• [SLOW TEST:10.881 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":280,"completed":191,"skipped":2876,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:43.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:28:43.699: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:28:46.726: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:28:46.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7390-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:47.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5120" for this suite. STEP: Destroying namespace "webhook-5120-markers" for this suite. 
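Registering the mutating webhook "via the AdmissionRegistration API", as the STEP above puts it, amounts to creating a MutatingWebhookConfiguration that points at the deployed sample-webhook service. A sketch of that registration — the service namespace/name and the CRD group/resource come from the log; the path, CA bundle handling, object names, and rule details are assumptions:

package sketch

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerCRDMutatingWebhook points the API server at the webhook service
// for CREATE/UPDATE of the test's custom resources, so objects get mutated
// regardless of which version (v1 or v2) is currently the storage version.
func registerCRDMutatingWebhook(cs kubernetes.Interface, caBundle []byte) error {
	path := "/mutating-custom-resource" // assumed path on the webhook server
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-webhook-7390-crds-mutator"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "e2e-test-webhook-7390-crds.webhook.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-5120",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle, // the server cert set up in BeforeEach
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create, admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"*"}, // both v1 and v2 are served
					Resources:   []string{"e2e-test-webhook-7390-crds"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	return err
}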
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":192,"skipped":2909,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:48.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 12 00:28:48.096: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 00:28:48.104: INFO: Waiting for terminating namespaces to be deleted... Mar 12 00:28:48.105: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 12 00:28:48.109: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 12 00:28:48.109: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 00:28:48.109: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 12 00:28:48.109: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 00:28:48.109: INFO: svc-latency-rc-4qxtr from svc-latency-742 started at 2020-03-12 00:28:32 +0000 UTC (1 container statuses recorded) Mar 12 00:28:48.109: INFO: Container svc-latency-rc ready: true, restart count 0 Mar 12 00:28:48.109: INFO: sample-webhook-deployment-5f65f8c764-s9h9r from webhook-5120 started at 2020-03-12 00:28:43 +0000 UTC (1 container statuses recorded) Mar 12 00:28:48.109: INFO: Container sample-webhook ready: true, restart count 0 Mar 12 00:28:48.109: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 12 00:28:48.119: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 12 00:28:48.119: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 00:28:48.119: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 12 00:28:48.119: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 00:28:48.119: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 12 00:28:48.119: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node 
which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-feb82f46-cf45-4c01-b900-333cc9274dbe 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-feb82f46-cf45-4c01-b900-333cc9274dbe off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-feb82f46-cf45-4c01-b900-333cc9274dbe [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:56.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6742" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:8.382 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":193,"skipped":2941,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:56.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:28:56.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config version' Mar 12 00:28:56.674: INFO: stderr: "" Mar 12 00:28:56.674: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", 
GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:56.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4910" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":280,"completed":194,"skipped":2944,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:56.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0312 00:28:57.644483 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 00:28:57.644: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:57.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7920" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":195,"skipped":2971,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:57.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:28:57.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3356" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":196,"skipped":3039,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:28:57.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-4693/configmap-test-71060d52-4efb-4e0d-9e7d-327c3a07d354 STEP: Creating a pod to test consume configMaps Mar 12 00:28:58.034: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88" in namespace "configmap-4693" to be "success or failure" Mar 12 00:28:58.040: INFO: Pod "pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.316279ms Mar 12 00:29:00.052: INFO: Pod "pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.018543096s Mar 12 00:29:02.063: INFO: Pod "pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029613396s STEP: Saw pod success Mar 12 00:29:02.063: INFO: Pod "pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88" satisfied condition "success or failure" Mar 12 00:29:02.066: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88 container env-test: STEP: delete the pod Mar 12 00:29:02.084: INFO: Waiting for pod pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88 to disappear Mar 12 00:29:02.088: INFO: Pod pod-configmaps-1a43a03f-ae4d-4d8d-8529-c69df97cbc88 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:29:02.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4693" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":197,"skipped":3044,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:29:02.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Mar 12 00:29:02.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8015' Mar 12 00:29:02.504: INFO: stderr: "" Mar 12 00:29:02.504: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 00:29:02.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8015' Mar 12 00:29:02.597: INFO: stderr: "" Mar 12 00:29:02.597: INFO: stdout: "update-demo-nautilus-6lxxf update-demo-nautilus-ftcsn " Mar 12 00:29:02.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6lxxf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:02.668: INFO: stderr: "" Mar 12 00:29:02.668: INFO: stdout: "" Mar 12 00:29:02.668: INFO: update-demo-nautilus-6lxxf is created but not running Mar 12 00:29:07.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8015' Mar 12 00:29:10.722: INFO: stderr: "" Mar 12 00:29:10.722: INFO: stdout: "update-demo-nautilus-6lxxf update-demo-nautilus-ftcsn " Mar 12 00:29:10.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6lxxf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:10.834: INFO: stderr: "" Mar 12 00:29:10.834: INFO: stdout: "true" Mar 12 00:29:10.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6lxxf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:10.917: INFO: stderr: "" Mar 12 00:29:10.917: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:29:10.917: INFO: validating pod update-demo-nautilus-6lxxf Mar 12 00:29:10.920: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:29:10.920: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:29:10.920: INFO: update-demo-nautilus-6lxxf is verified up and running Mar 12 00:29:10.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftcsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:10.988: INFO: stderr: "" Mar 12 00:29:10.988: INFO: stdout: "true" Mar 12 00:29:10.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftcsn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:11.047: INFO: stderr: "" Mar 12 00:29:11.048: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:29:11.048: INFO: validating pod update-demo-nautilus-ftcsn Mar 12 00:29:11.050: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:29:11.050: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 12 00:29:11.050: INFO: update-demo-nautilus-ftcsn is verified up and running STEP: rolling-update to new replication controller Mar 12 00:29:11.051: INFO: scanned /root for discovery docs: Mar 12 00:29:11.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8015' Mar 12 00:29:33.496: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 12 00:29:33.496: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 00:29:33.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8015' Mar 12 00:29:33.597: INFO: stderr: "" Mar 12 00:29:33.597: INFO: stdout: "update-demo-kitten-mrbcb update-demo-kitten-tlzxf " Mar 12 00:29:33.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-mrbcb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:33.674: INFO: stderr: "" Mar 12 00:29:33.674: INFO: stdout: "true" Mar 12 00:29:33.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-mrbcb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:33.755: INFO: stderr: "" Mar 12 00:29:33.755: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 12 00:29:33.755: INFO: validating pod update-demo-kitten-mrbcb Mar 12 00:29:33.759: INFO: got data: { "image": "kitten.jpg" } Mar 12 00:29:33.759: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 12 00:29:33.759: INFO: update-demo-kitten-mrbcb is verified up and running Mar 12 00:29:33.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-tlzxf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:33.821: INFO: stderr: "" Mar 12 00:29:33.821: INFO: stdout: "true" Mar 12 00:29:33.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-kitten-tlzxf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8015' Mar 12 00:29:33.885: INFO: stderr: "" Mar 12 00:29:33.885: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 12 00:29:33.885: INFO: validating pod update-demo-kitten-tlzxf Mar 12 00:29:33.888: INFO: got data: { "image": "kitten.jpg" } Mar 12 00:29:33.888: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 12 00:29:33.888: INFO: update-demo-kitten-tlzxf is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:29:33.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8015" for this suite. • [SLOW TEST:31.784 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":280,"completed":198,"skipped":3051,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:29:33.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Mar 12 00:29:33.978: INFO: Waiting up to 5m0s for pod "var-expansion-94e8b155-a3b6-4cf2-a7fd-feb3e6cace40" in namespace "var-expansion-8779" to be "success or failure" Mar 12 00:29:34.045: INFO: Pod "var-expansion-94e8b155-a3b6-4cf2-a7fd-feb3e6cace40": Phase="Pending", Reason="", readiness=false. Elapsed: 67.601614ms Mar 12 00:29:36.050: INFO: Pod "var-expansion-94e8b155-a3b6-4cf2-a7fd-feb3e6cace40": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.07203274s STEP: Saw pod success Mar 12 00:29:36.050: INFO: Pod "var-expansion-94e8b155-a3b6-4cf2-a7fd-feb3e6cace40" satisfied condition "success or failure" Mar 12 00:29:36.053: INFO: Trying to get logs from node latest-worker pod var-expansion-94e8b155-a3b6-4cf2-a7fd-feb3e6cace40 container dapi-container: STEP: delete the pod Mar 12 00:29:36.067: INFO: Waiting for pod var-expansion-94e8b155-a3b6-4cf2-a7fd-feb3e6cace40 to disappear Mar 12 00:29:36.071: INFO: Pod var-expansion-94e8b155-a3b6-4cf2-a7fd-feb3e6cace40 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:29:36.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8779" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":199,"skipped":3063,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:29:36.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:29:52.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9874" for this suite. • [SLOW TEST:16.172 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":280,"completed":200,"skipped":3079,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:29:52.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 12 00:29:52.345: INFO: Waiting up to 5m0s for pod "pod-4aef243f-d7dd-404a-8830-7cfdc2337747" in namespace "emptydir-8851" to be "success or failure" Mar 12 00:29:52.371: INFO: Pod "pod-4aef243f-d7dd-404a-8830-7cfdc2337747": Phase="Pending", Reason="", readiness=false. Elapsed: 25.956428ms Mar 12 00:29:54.375: INFO: Pod "pod-4aef243f-d7dd-404a-8830-7cfdc2337747": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029275424s Mar 12 00:29:56.378: INFO: Pod "pod-4aef243f-d7dd-404a-8830-7cfdc2337747": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032839068s STEP: Saw pod success Mar 12 00:29:56.378: INFO: Pod "pod-4aef243f-d7dd-404a-8830-7cfdc2337747" satisfied condition "success or failure" Mar 12 00:29:56.381: INFO: Trying to get logs from node latest-worker pod pod-4aef243f-d7dd-404a-8830-7cfdc2337747 container test-container: STEP: delete the pod Mar 12 00:29:56.409: INFO: Waiting for pod pod-4aef243f-d7dd-404a-8830-7cfdc2337747 to disappear Mar 12 00:29:56.414: INFO: Pod pod-4aef243f-d7dd-404a-8830-7cfdc2337747 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:29:56.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8851" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":201,"skipped":3088,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:29:56.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:29:56.463: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:29:57.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9789" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":280,"completed":202,"skipped":3109,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:29:57.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:29:57.621: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:30:01.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-122" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":280,"completed":203,"skipped":3111,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:30:01.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-25cdb19e-7013-4dbf-b645-5628218ecc1b STEP: Creating a pod to test consume configMaps Mar 12 00:30:01.816: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-adf2e4f4-836a-4e3a-bd2f-08ab2576a0df" in namespace "projected-4940" to be "success or failure" Mar 12 00:30:01.820: INFO: Pod "pod-projected-configmaps-adf2e4f4-836a-4e3a-bd2f-08ab2576a0df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281212ms Mar 12 00:30:03.824: INFO: Pod "pod-projected-configmaps-adf2e4f4-836a-4e3a-bd2f-08ab2576a0df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007850164s STEP: Saw pod success Mar 12 00:30:03.824: INFO: Pod "pod-projected-configmaps-adf2e4f4-836a-4e3a-bd2f-08ab2576a0df" satisfied condition "success or failure" Mar 12 00:30:03.826: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-adf2e4f4-836a-4e3a-bd2f-08ab2576a0df container projected-configmap-volume-test: STEP: delete the pod Mar 12 00:30:03.847: INFO: Waiting for pod pod-projected-configmaps-adf2e4f4-836a-4e3a-bd2f-08ab2576a0df to disappear Mar 12 00:30:03.871: INFO: Pod pod-projected-configmaps-adf2e4f4-836a-4e3a-bd2f-08ab2576a0df no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:30:03.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4940" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3116,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:30:03.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 12 00:30:03.933: INFO: Waiting up to 5m0s for pod "downward-api-637176e3-9cef-4530-bebb-d9b78e412b30" in namespace "downward-api-6373" to be "success or failure" Mar 12 00:30:03.934: INFO: Pod "downward-api-637176e3-9cef-4530-bebb-d9b78e412b30": Phase="Pending", Reason="", readiness=false. Elapsed: 1.826034ms Mar 12 00:30:05.938: INFO: Pod "downward-api-637176e3-9cef-4530-bebb-d9b78e412b30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005693478s STEP: Saw pod success Mar 12 00:30:05.938: INFO: Pod "downward-api-637176e3-9cef-4530-bebb-d9b78e412b30" satisfied condition "success or failure" Mar 12 00:30:05.941: INFO: Trying to get logs from node latest-worker pod downward-api-637176e3-9cef-4530-bebb-d9b78e412b30 container dapi-container: STEP: delete the pod Mar 12 00:30:05.967: INFO: Waiting for pod downward-api-637176e3-9cef-4530-bebb-d9b78e412b30 to disappear Mar 12 00:30:05.971: INFO: Pod downward-api-637176e3-9cef-4530-bebb-d9b78e412b30 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:30:05.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6373" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":205,"skipped":3130,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:30:05.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on node default medium Mar 12 00:30:06.032: INFO: Waiting up to 5m0s for pod "pod-13bf4276-0e11-466e-a544-14aeac1810a4" in namespace "emptydir-1218" to be "success or failure" Mar 12 00:30:06.069: INFO: Pod "pod-13bf4276-0e11-466e-a544-14aeac1810a4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.110267ms Mar 12 00:30:08.072: INFO: Pod "pod-13bf4276-0e11-466e-a544-14aeac1810a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040456313s STEP: Saw pod success Mar 12 00:30:08.072: INFO: Pod "pod-13bf4276-0e11-466e-a544-14aeac1810a4" satisfied condition "success or failure" Mar 12 00:30:08.074: INFO: Trying to get logs from node latest-worker pod pod-13bf4276-0e11-466e-a544-14aeac1810a4 container test-container: STEP: delete the pod Mar 12 00:30:08.112: INFO: Waiting for pod pod-13bf4276-0e11-466e-a544-14aeac1810a4 to disappear Mar 12 00:30:08.115: INFO: Pod pod-13bf4276-0e11-466e-a544-14aeac1810a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:30:08.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1218" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":206,"skipped":3163,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:30:08.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9972.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9972.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 00:30:12.200: INFO: DNS probes using dns-test-e76930be-bd6d-441e-a809-b6234eca298c succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9972.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9972.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 00:30:16.271: INFO: File wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:16.274: INFO: File jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:16.274: INFO: Lookups using dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd failed for: [wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local] Mar 12 00:30:21.283: INFO: File wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:21.289: INFO: File jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 12 00:30:21.289: INFO: Lookups using dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd failed for: [wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local] Mar 12 00:30:26.282: INFO: File wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:26.286: INFO: File jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:26.286: INFO: Lookups using dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd failed for: [wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local] Mar 12 00:30:31.278: INFO: File wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:31.281: INFO: File jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:31.281: INFO: Lookups using dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd failed for: [wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local] Mar 12 00:30:36.278: INFO: File wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:36.281: INFO: File jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 00:30:36.281: INFO: Lookups using dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd failed for: [wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local] Mar 12 00:30:41.282: INFO: File jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local from pod dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 12 00:30:41.282: INFO: Lookups using dns-9972/dns-test-9461c813-b10f-408e-91c4-f5997477d9dd failed for: [jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local] Mar 12 00:30:46.282: INFO: DNS probes using dns-test-9461c813-b10f-408e-91c4-f5997477d9dd succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9972.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9972.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9972.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9972.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 00:30:50.367: INFO: DNS probes using dns-test-c5e35ec1-13e0-4fb8-bea4-d96af42bcf26 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:30:50.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9972" for this suite. • [SLOW TEST:42.346 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":207,"skipped":3179,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:30:50.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3008.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3008.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3008.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3008.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3008.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3008.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 00:30:54.657: INFO: DNS probes using dns-3008/dns-test-752cc21a-104d-4e63-939c-82f00900873d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:30:54.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3008" for this suite. •{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":208,"skipped":3183,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:30:54.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 12 00:30:57.317: INFO: Successfully updated pod "adopt-release-52v8h" STEP: Checking that the Job readopts the Pod Mar 12 00:30:57.317: INFO: Waiting up to 15m0s for pod "adopt-release-52v8h" in namespace "job-3759" to be "adopted" Mar 12 00:30:57.326: INFO: Pod "adopt-release-52v8h": Phase="Running", Reason="", readiness=true. Elapsed: 9.017336ms Mar 12 00:30:59.330: INFO: Pod "adopt-release-52v8h": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.012726931s Mar 12 00:30:59.330: INFO: Pod "adopt-release-52v8h" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 12 00:30:59.841: INFO: Successfully updated pod "adopt-release-52v8h" STEP: Checking that the Job releases the Pod Mar 12 00:30:59.841: INFO: Waiting up to 15m0s for pod "adopt-release-52v8h" in namespace "job-3759" to be "released" Mar 12 00:30:59.865: INFO: Pod "adopt-release-52v8h": Phase="Running", Reason="", readiness=true. Elapsed: 23.389397ms Mar 12 00:30:59.865: INFO: Pod "adopt-release-52v8h" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:30:59.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3759" for this suite. • [SLOW TEST:5.195 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":209,"skipped":3241,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:30:59.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9122 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9122 I0312 00:31:00.105533 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9122, replica count: 2 I0312 00:31:03.156113 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 00:31:03.156: INFO: Creating new exec pod Mar 12 00:31:06.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-9122 execpodrxbzz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 12 00:31:06.402: INFO: stderr: "I0312 00:31:06.332937 3180 log.go:172] (0xc000770790) (0xc00078e0a0) Create stream\nI0312 00:31:06.332980 3180 log.go:172] (0xc000770790) (0xc00078e0a0) Stream added, broadcasting: 1\nI0312 00:31:06.335385 3180 log.go:172] (0xc000770790) Reply frame received 
for 1\nI0312 00:31:06.335419 3180 log.go:172] (0xc000770790) (0xc0006d1900) Create stream\nI0312 00:31:06.335431 3180 log.go:172] (0xc000770790) (0xc0006d1900) Stream added, broadcasting: 3\nI0312 00:31:06.336181 3180 log.go:172] (0xc000770790) Reply frame received for 3\nI0312 00:31:06.336207 3180 log.go:172] (0xc000770790) (0xc0006d1ae0) Create stream\nI0312 00:31:06.336217 3180 log.go:172] (0xc000770790) (0xc0006d1ae0) Stream added, broadcasting: 5\nI0312 00:31:06.336910 3180 log.go:172] (0xc000770790) Reply frame received for 5\nI0312 00:31:06.397114 3180 log.go:172] (0xc000770790) Data frame received for 5\nI0312 00:31:06.397140 3180 log.go:172] (0xc0006d1ae0) (5) Data frame handling\nI0312 00:31:06.397155 3180 log.go:172] (0xc0006d1ae0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0312 00:31:06.397359 3180 log.go:172] (0xc000770790) Data frame received for 5\nI0312 00:31:06.397386 3180 log.go:172] (0xc0006d1ae0) (5) Data frame handling\nI0312 00:31:06.397407 3180 log.go:172] (0xc0006d1ae0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0312 00:31:06.398205 3180 log.go:172] (0xc000770790) Data frame received for 3\nI0312 00:31:06.398228 3180 log.go:172] (0xc0006d1900) (3) Data frame handling\nI0312 00:31:06.398246 3180 log.go:172] (0xc000770790) Data frame received for 5\nI0312 00:31:06.398267 3180 log.go:172] (0xc0006d1ae0) (5) Data frame handling\nI0312 00:31:06.399621 3180 log.go:172] (0xc000770790) Data frame received for 1\nI0312 00:31:06.399642 3180 log.go:172] (0xc00078e0a0) (1) Data frame handling\nI0312 00:31:06.399652 3180 log.go:172] (0xc00078e0a0) (1) Data frame sent\nI0312 00:31:06.399660 3180 log.go:172] (0xc000770790) (0xc00078e0a0) Stream removed, broadcasting: 1\nI0312 00:31:06.399671 3180 log.go:172] (0xc000770790) Go away received\nI0312 00:31:06.399946 3180 log.go:172] (0xc000770790) (0xc00078e0a0) Stream removed, broadcasting: 1\nI0312 00:31:06.399961 3180 log.go:172] (0xc000770790) (0xc0006d1900) Stream removed, broadcasting: 3\nI0312 00:31:06.399967 3180 log.go:172] (0xc000770790) (0xc0006d1ae0) Stream removed, broadcasting: 5\n" Mar 12 00:31:06.402: INFO: stdout: "" Mar 12 00:31:06.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-9122 execpodrxbzz -- /bin/sh -x -c nc -zv -t -w 2 10.96.252.169 80' Mar 12 00:31:06.566: INFO: stderr: "I0312 00:31:06.502432 3200 log.go:172] (0xc00053e000) (0xc0005fe6e0) Create stream\nI0312 00:31:06.502472 3200 log.go:172] (0xc00053e000) (0xc0005fe6e0) Stream added, broadcasting: 1\nI0312 00:31:06.504175 3200 log.go:172] (0xc00053e000) Reply frame received for 1\nI0312 00:31:06.504198 3200 log.go:172] (0xc00053e000) (0xc0006a5e00) Create stream\nI0312 00:31:06.504204 3200 log.go:172] (0xc00053e000) (0xc0006a5e00) Stream added, broadcasting: 3\nI0312 00:31:06.504787 3200 log.go:172] (0xc00053e000) Reply frame received for 3\nI0312 00:31:06.504822 3200 log.go:172] (0xc00053e000) (0xc0003e3360) Create stream\nI0312 00:31:06.504834 3200 log.go:172] (0xc00053e000) (0xc0003e3360) Stream added, broadcasting: 5\nI0312 00:31:06.505709 3200 log.go:172] (0xc00053e000) Reply frame received for 5\nI0312 00:31:06.561218 3200 log.go:172] (0xc00053e000) Data frame received for 3\nI0312 00:31:06.561251 3200 log.go:172] (0xc0006a5e00) (3) Data frame handling\nI0312 00:31:06.561271 3200 log.go:172] (0xc00053e000) Data frame received for 5\nI0312 00:31:06.561279 3200 log.go:172] (0xc0003e3360) (5) Data 
frame handling\nI0312 00:31:06.561286 3200 log.go:172] (0xc0003e3360) (5) Data frame sent\nI0312 00:31:06.561293 3200 log.go:172] (0xc00053e000) Data frame received for 5\nI0312 00:31:06.561298 3200 log.go:172] (0xc0003e3360) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.252.169 80\nConnection to 10.96.252.169 80 port [tcp/http] succeeded!\nI0312 00:31:06.562588 3200 log.go:172] (0xc00053e000) Data frame received for 1\nI0312 00:31:06.562646 3200 log.go:172] (0xc0005fe6e0) (1) Data frame handling\nI0312 00:31:06.562656 3200 log.go:172] (0xc0005fe6e0) (1) Data frame sent\nI0312 00:31:06.562664 3200 log.go:172] (0xc00053e000) (0xc0005fe6e0) Stream removed, broadcasting: 1\nI0312 00:31:06.562675 3200 log.go:172] (0xc00053e000) Go away received\nI0312 00:31:06.562984 3200 log.go:172] (0xc00053e000) (0xc0005fe6e0) Stream removed, broadcasting: 1\nI0312 00:31:06.563002 3200 log.go:172] (0xc00053e000) (0xc0006a5e00) Stream removed, broadcasting: 3\nI0312 00:31:06.563008 3200 log.go:172] (0xc00053e000) (0xc0003e3360) Stream removed, broadcasting: 5\n" Mar 12 00:31:06.566: INFO: stdout: "" Mar 12 00:31:06.566: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:31:06.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9122" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:6.697 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":210,"skipped":3251,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:31:06.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0312 00:31:16.738312 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 12 00:31:16.738: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:31:16.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6028" for this suite. • [SLOW TEST:10.110 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":211,"skipped":3308,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:31:16.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override command Mar 12 00:31:16.789: INFO: Waiting up to 5m0s for pod "client-containers-9d2447c7-667e-4192-b567-45c2196f024b" in namespace "containers-8671" to be "success or failure" Mar 12 00:31:16.812: INFO: Pod "client-containers-9d2447c7-667e-4192-b567-45c2196f024b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.294683ms Mar 12 00:31:18.816: INFO: Pod "client-containers-9d2447c7-667e-4192-b567-45c2196f024b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.026771318s STEP: Saw pod success Mar 12 00:31:18.816: INFO: Pod "client-containers-9d2447c7-667e-4192-b567-45c2196f024b" satisfied condition "success or failure" Mar 12 00:31:18.818: INFO: Trying to get logs from node latest-worker pod client-containers-9d2447c7-667e-4192-b567-45c2196f024b container test-container: STEP: delete the pod Mar 12 00:31:18.849: INFO: Waiting for pod client-containers-9d2447c7-667e-4192-b567-45c2196f024b to disappear Mar 12 00:31:18.872: INFO: Pod client-containers-9d2447c7-667e-4192-b567-45c2196f024b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:31:18.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8671" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":212,"skipped":3317,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:31:18.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 12 00:31:21.454: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d7c63a3c-2ccf-427c-8677-92ea86036d13" Mar 12 00:31:21.454: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d7c63a3c-2ccf-427c-8677-92ea86036d13" in namespace "pods-3907" to be "terminated due to deadline exceeded" Mar 12 00:31:21.463: INFO: Pod "pod-update-activedeadlineseconds-d7c63a3c-2ccf-427c-8677-92ea86036d13": Phase="Running", Reason="", readiness=true. Elapsed: 8.883071ms Mar 12 00:31:23.470: INFO: Pod "pod-update-activedeadlineseconds-d7c63a3c-2ccf-427c-8677-92ea86036d13": Phase="Running", Reason="", readiness=true. Elapsed: 2.015693102s Mar 12 00:31:25.474: INFO: Pod "pod-update-activedeadlineseconds-d7c63a3c-2ccf-427c-8677-92ea86036d13": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.019572583s Mar 12 00:31:25.474: INFO: Pod "pod-update-activedeadlineseconds-d7c63a3c-2ccf-427c-8677-92ea86036d13" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:31:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3907" for this suite. 
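For reference, the behavior the Pods test above verifies (setting spec.activeDeadlineSeconds on a running pod causes the kubelet to kill it and report DeadlineExceeded) can be reproduced with a plain manifest. A minimal sketch; the pod name and image are illustrative, not taken from this run:

# Hypothetical reproduction of the activeDeadlineSeconds check above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo            # illustrative name
spec:
  activeDeadlineSeconds: 5       # kubelet terminates the pod ~5s after it starts
  restartPolicy: Never
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2  # assumed image; any long-running image works
EOF
# Once the deadline passes, the pod reports Phase=Failed, Reason=DeadlineExceeded:
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'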
• [SLOW TEST:6.603 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":213,"skipped":3319,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:31:25.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:31:31.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7725" for this suite. STEP: Destroying namespace "nsdeletetest-3309" for this suite. Mar 12 00:31:31.734: INFO: Namespace nsdeletetest-3309 was already deleted STEP: Destroying namespace "nsdeletetest-9939" for this suite. 
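The Namespaces test above can be walked through by hand with standard kubectl; the names below are illustrative, not from this run:

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-svc --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo          # deletion cascades to the service
kubectl wait --for=delete namespace/nsdelete-demo --timeout=60s
kubectl create namespace nsdelete-demo          # recreate under the same name
kubectl get services -n nsdelete-demo           # expect: No resources found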
• [SLOW TEST:6.254 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":214,"skipped":3333,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:31:31.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:31:32.226: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 00:31:34.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569892, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569892, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569892, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719569892, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:31:37.267: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:31:37.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9412" for this suite. STEP: Destroying namespace "webhook-9412-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.071 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":215,"skipped":3365,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:31:37.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 12 00:31:37.850: INFO: Waiting up to 5m0s for pod "pod-dfecae19-1c7d-49c9-adf4-35cfff30beee" in namespace "emptydir-7505" to be "success or failure" Mar 12 00:31:37.873: INFO: Pod "pod-dfecae19-1c7d-49c9-adf4-35cfff30beee": Phase="Pending", Reason="", readiness=false. Elapsed: 22.957742ms Mar 12 00:31:39.877: INFO: Pod "pod-dfecae19-1c7d-49c9-adf4-35cfff30beee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026700088s STEP: Saw pod success Mar 12 00:31:39.877: INFO: Pod "pod-dfecae19-1c7d-49c9-adf4-35cfff30beee" satisfied condition "success or failure" Mar 12 00:31:39.879: INFO: Trying to get logs from node latest-worker pod pod-dfecae19-1c7d-49c9-adf4-35cfff30beee container test-container: STEP: delete the pod Mar 12 00:31:39.898: INFO: Waiting for pod pod-dfecae19-1c7d-49c9-adf4-35cfff30beee to disappear Mar 12 00:31:39.926: INFO: Pod pod-dfecae19-1c7d-49c9-adf4-35cfff30beee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:31:39.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7505" for this suite. 
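The tuple in the test name above, (root,0644,tmpfs), encodes who writes the file (root), the mode it must carry (0644), and the emptyDir medium (Memory, i.e. tmpfs). A minimal sketch of the same setup; the name and image are illustrative, the suite uses its own mount-test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox               # assumed image
    command: ["sh", "-c", "mount | grep /mnt/tmp && touch /mnt/tmp/f && chmod 0644 /mnt/tmp/f && ls -l /mnt/tmp"]
    volumeMounts:
    - name: tmp
      mountPath: /mnt/tmp
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir, the "tmpfs" in the test name
EOF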
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":216,"skipped":3378,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:31:39.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-3429 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 00:31:39.970: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 12 00:31:40.017: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 12 00:31:42.020: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:31:44.020: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:31:46.020: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:31:48.021: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:31:50.020: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:31:52.020: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:31:54.020: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:31:56.020: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 12 00:31:56.024: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 12 00:31:58.027: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 12 00:32:00.027: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 12 00:32:02.071: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.154 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3429 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:32:02.071: INFO: >>> kubeConfig: /root/.kube/config I0312 00:32:02.105215 7 log.go:172] (0xc0023a02c0) (0xc001f64d20) Create stream I0312 00:32:02.105244 7 log.go:172] (0xc0023a02c0) (0xc001f64d20) Stream added, broadcasting: 1 I0312 00:32:02.107600 7 log.go:172] (0xc0023a02c0) Reply frame received for 1 I0312 00:32:02.107635 7 log.go:172] (0xc0023a02c0) (0xc001f64dc0) Create stream I0312 00:32:02.107644 7 log.go:172] (0xc0023a02c0) (0xc001f64dc0) Stream added, broadcasting: 3 I0312 00:32:02.108384 7 log.go:172] (0xc0023a02c0) Reply frame received for 3 I0312 00:32:02.108401 7 log.go:172] (0xc0023a02c0) (0xc001d82500) Create stream I0312 00:32:02.108409 7 log.go:172] (0xc0023a02c0) (0xc001d82500) Stream added, broadcasting: 5 I0312 00:32:02.109296 7 
log.go:172] (0xc0023a02c0) Reply frame received for 5 I0312 00:32:03.177761 7 log.go:172] (0xc0023a02c0) Data frame received for 3 I0312 00:32:03.177796 7 log.go:172] (0xc001f64dc0) (3) Data frame handling I0312 00:32:03.177822 7 log.go:172] (0xc001f64dc0) (3) Data frame sent I0312 00:32:03.177839 7 log.go:172] (0xc0023a02c0) Data frame received for 3 I0312 00:32:03.177853 7 log.go:172] (0xc001f64dc0) (3) Data frame handling I0312 00:32:03.178166 7 log.go:172] (0xc0023a02c0) Data frame received for 5 I0312 00:32:03.178195 7 log.go:172] (0xc001d82500) (5) Data frame handling I0312 00:32:03.179727 7 log.go:172] (0xc0023a02c0) Data frame received for 1 I0312 00:32:03.179746 7 log.go:172] (0xc001f64d20) (1) Data frame handling I0312 00:32:03.179756 7 log.go:172] (0xc001f64d20) (1) Data frame sent I0312 00:32:03.179780 7 log.go:172] (0xc0023a02c0) (0xc001f64d20) Stream removed, broadcasting: 1 I0312 00:32:03.179813 7 log.go:172] (0xc0023a02c0) Go away received I0312 00:32:03.179928 7 log.go:172] (0xc0023a02c0) (0xc001f64d20) Stream removed, broadcasting: 1 I0312 00:32:03.179949 7 log.go:172] (0xc0023a02c0) (0xc001f64dc0) Stream removed, broadcasting: 3 I0312 00:32:03.179963 7 log.go:172] (0xc0023a02c0) (0xc001d82500) Stream removed, broadcasting: 5 Mar 12 00:32:03.179: INFO: Found all expected endpoints: [netserver-0] Mar 12 00:32:03.183: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.34 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3429 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:32:03.183: INFO: >>> kubeConfig: /root/.kube/config I0312 00:32:03.212410 7 log.go:172] (0xc002a24e70) (0xc001776820) Create stream I0312 00:32:03.212444 7 log.go:172] (0xc002a24e70) (0xc001776820) Stream added, broadcasting: 1 I0312 00:32:03.214935 7 log.go:172] (0xc002a24e70) Reply frame received for 1 I0312 00:32:03.214966 7 log.go:172] (0xc002a24e70) (0xc0017768c0) Create stream I0312 00:32:03.214977 7 log.go:172] (0xc002a24e70) (0xc0017768c0) Stream added, broadcasting: 3 I0312 00:32:03.216097 7 log.go:172] (0xc002a24e70) Reply frame received for 3 I0312 00:32:03.216125 7 log.go:172] (0xc002a24e70) (0xc001d826e0) Create stream I0312 00:32:03.216136 7 log.go:172] (0xc002a24e70) (0xc001d826e0) Stream added, broadcasting: 5 I0312 00:32:03.217140 7 log.go:172] (0xc002a24e70) Reply frame received for 5 I0312 00:32:04.268586 7 log.go:172] (0xc002a24e70) Data frame received for 3 I0312 00:32:04.268612 7 log.go:172] (0xc0017768c0) (3) Data frame handling I0312 00:32:04.268629 7 log.go:172] (0xc0017768c0) (3) Data frame sent I0312 00:32:04.268642 7 log.go:172] (0xc002a24e70) Data frame received for 3 I0312 00:32:04.268654 7 log.go:172] (0xc0017768c0) (3) Data frame handling I0312 00:32:04.268832 7 log.go:172] (0xc002a24e70) Data frame received for 5 I0312 00:32:04.268894 7 log.go:172] (0xc001d826e0) (5) Data frame handling I0312 00:32:04.270934 7 log.go:172] (0xc002a24e70) Data frame received for 1 I0312 00:32:04.270952 7 log.go:172] (0xc001776820) (1) Data frame handling I0312 00:32:04.270963 7 log.go:172] (0xc001776820) (1) Data frame sent I0312 00:32:04.271049 7 log.go:172] (0xc002a24e70) (0xc001776820) Stream removed, broadcasting: 1 I0312 00:32:04.271122 7 log.go:172] (0xc002a24e70) (0xc001776820) Stream removed, broadcasting: 1 I0312 00:32:04.271137 7 log.go:172] (0xc002a24e70) (0xc0017768c0) Stream removed, broadcasting: 3 I0312 00:32:04.271149 7 log.go:172] (0xc002a24e70) (0xc001d826e0) 
Stream removed, broadcasting: 5 Mar 12 00:32:04.271: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:04.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0312 00:32:04.271444 7 log.go:172] (0xc002a24e70) Go away received STEP: Destroying namespace "pod-network-test-3429" for this suite. • [SLOW TEST:24.343 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":217,"skipped":3403,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:04.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 12 00:32:04.373: INFO: Waiting up to 5m0s for pod "pod-e0f1d529-e3b4-4b1d-8ba3-c3c14d12a4ff" in namespace "emptydir-8198" to be "success or failure" Mar 12 00:32:04.376: INFO: Pod "pod-e0f1d529-e3b4-4b1d-8ba3-c3c14d12a4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.044453ms Mar 12 00:32:06.380: INFO: Pod "pod-e0f1d529-e3b4-4b1d-8ba3-c3c14d12a4ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006988309s STEP: Saw pod success Mar 12 00:32:06.380: INFO: Pod "pod-e0f1d529-e3b4-4b1d-8ba3-c3c14d12a4ff" satisfied condition "success or failure" Mar 12 00:32:06.382: INFO: Trying to get logs from node latest-worker pod pod-e0f1d529-e3b4-4b1d-8ba3-c3c14d12a4ff container test-container: STEP: delete the pod Mar 12 00:32:06.402: INFO: Waiting for pod pod-e0f1d529-e3b4-4b1d-8ba3-c3c14d12a4ff to disappear Mar 12 00:32:06.406: INFO: Pod pod-e0f1d529-e3b4-4b1d-8ba3-c3c14d12a4ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:06.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8198" for this suite. 
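As with the tmpfs case a few tests back, the tuple (non-root,0666,default) names the user, the file mode, and the emptyDir medium under test. A sketch of the non-root, default-medium variant (name and image are illustrative; the suite uses its own test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root; emptyDir is world-writable by default
  containers:
  - name: test
    image: busybox               # assumed image
    command: ["sh", "-c", "touch /mnt/tmp/f && chmod 0666 /mnt/tmp/f && ls -ln /mnt/tmp"]
    volumeMounts:
    - name: tmp
      mountPath: /mnt/tmp
  volumes:
  - name: tmp
    emptyDir: {}                 # default medium: backed by node disk, not tmpfs
EOF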
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":218,"skipped":3515,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:06.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-0311b33f-f1a6-4676-b553-ae6e688644ad STEP: Creating a pod to test consume configMaps Mar 12 00:32:06.519: INFO: Waiting up to 5m0s for pod "pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95" in namespace "configmap-7698" to be "success or failure" Mar 12 00:32:06.550: INFO: Pod "pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95": Phase="Pending", Reason="", readiness=false. Elapsed: 30.447859ms Mar 12 00:32:08.553: INFO: Pod "pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95": Phase="Running", Reason="", readiness=true. Elapsed: 2.033670542s Mar 12 00:32:10.556: INFO: Pod "pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036358197s STEP: Saw pod success Mar 12 00:32:10.556: INFO: Pod "pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95" satisfied condition "success or failure" Mar 12 00:32:10.557: INFO: Trying to get logs from node latest-worker pod pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95 container configmap-volume-test: STEP: delete the pod Mar 12 00:32:10.585: INFO: Waiting for pod pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95 to disappear Mar 12 00:32:10.603: INFO: Pod pod-configmaps-66965c9b-9116-42c9-9c84-627777f7fe95 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:10.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7698" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":219,"skipped":3522,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:10.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-9418/configmap-test-e33164c3-c599-407f-90c9-0e9751875cf8 STEP: Creating a pod to test consume configMaps Mar 12 00:32:10.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b" in namespace "configmap-9418" to be "success or failure" Mar 12 00:32:10.668: INFO: Pod "pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.736578ms Mar 12 00:32:12.671: INFO: Pod "pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b": Phase="Running", Reason="", readiness=true. Elapsed: 2.008044162s Mar 12 00:32:14.676: INFO: Pod "pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012902197s STEP: Saw pod success Mar 12 00:32:14.676: INFO: Pod "pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b" satisfied condition "success or failure" Mar 12 00:32:14.679: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b container env-test: STEP: delete the pod Mar 12 00:32:14.694: INFO: Waiting for pod pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b to disappear Mar 12 00:32:14.698: INFO: Pod pod-configmaps-d7e3728c-3dd9-4ea4-81b2-ab666d165b3b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:14.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9418" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":220,"skipped":3532,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:14.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:32:14.795: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"705ee61f-b5ea-4ef9-9c3c-a4d9be6c8de3", Controller:(*bool)(0xc002250bb2), BlockOwnerDeletion:(*bool)(0xc002250bb3)}} Mar 12 00:32:14.838: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f69cf86f-9814-46d3-bc37-4ff8f52e7ed8", Controller:(*bool)(0xc0021488da), BlockOwnerDeletion:(*bool)(0xc0021488db)}} Mar 12 00:32:14.844: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"24a155c9-9497-4494-82e1-dc374724b53d", Controller:(*bool)(0xc002251072), BlockOwnerDeletion:(*bool)(0xc002251073)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:19.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6061" for this suite. 
• [SLOW TEST:5.164 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":221,"skipped":3545,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:19.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 12 00:32:22.528: INFO: Successfully updated pod "annotationupdate615b91f6-289d-4327-83d6-53e6636151d9" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:24.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1453" for this suite. 
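The Projected downwardAPI test above relies on the kubelet refreshing a downward API volume after the pod's annotations are modified. A sketch of the mechanism, with illustrative names and an assumed busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Changing the annotation shows up in the mounted file after the kubelet resyncs:
kubectl annotate pod annotation-demo build=two --overwrite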
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":222,"skipped":3552,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:24.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 12 00:32:29.175: INFO: Successfully updated pod "annotationupdate45a286c8-f3eb-49a9-aacf-be1b984a153a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:31.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1110" for this suite. • [SLOW TEST:6.683 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":223,"skipped":3555,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:31.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-31cf9c5e-bebc-4cc3-a19d-bd835ee65e7a STEP: Creating a pod to test consume secrets Mar 12 00:32:31.334: INFO: Waiting up to 5m0s for pod "pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662" in namespace "secrets-5453" to be "success or failure" Mar 12 
00:32:31.338: INFO: Pod "pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662": Phase="Pending", Reason="", readiness=false. Elapsed: 3.211916ms Mar 12 00:32:33.341: INFO: Pod "pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006205691s Mar 12 00:32:35.344: INFO: Pod "pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009491526s STEP: Saw pod success Mar 12 00:32:35.344: INFO: Pod "pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662" satisfied condition "success or failure" Mar 12 00:32:35.346: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662 container secret-volume-test: STEP: delete the pod Mar 12 00:32:35.383: INFO: Waiting for pod pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662 to disappear Mar 12 00:32:35.394: INFO: Pod pod-secrets-613ff8eb-3262-4ab0-b098-dfd573575662 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:35.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5453" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":224,"skipped":3641,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:35.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:35.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1469" for this suite. 
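The "fetching services" step above is the cluster-wide list call behind a familiar CLI command; the manual equivalent:

# List services across every namespace, as the test's list call does:
kubectl get services --all-namespaces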
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":225,"skipped":3642,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:35.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 12 00:32:35.681: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 12 00:32:40.684: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:32:40.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4259" for this suite. • [SLOW TEST:5.240 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":226,"skipped":3647,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:40.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 
00:32:45.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9625" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":227,"skipped":3654,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:32:45.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Mar 12 00:32:45.687: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 00:32:45.706: INFO: Waiting for terminating namespaces to be deleted... Mar 12 00:32:45.708: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 12 00:32:45.712: INFO: pod-release-qx9sn from replication-controller-4259 started at 2020-03-12 00:32:40 +0000 UTC (1 container statuses recorded) Mar 12 00:32:45.713: INFO: Container pod-release ready: true, restart count 0 Mar 12 00:32:45.713: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 12 00:32:45.713: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 00:32:45.713: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 12 00:32:45.713: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 00:32:45.713: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 12 00:32:45.717: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 12 00:32:45.717: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 00:32:45.717: INFO: pod-release-q7zkz from replication-controller-4259 started at 2020-03-12 00:32:35 +0000 UTC (1 container statuses recorded) Mar 12 00:32:45.717: INFO: Container pod-release ready: true, restart count 0 Mar 12 00:32:45.717: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 12 00:32:45.717: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 00:32:45.717: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 12 00:32:45.717: INFO: Container coredns ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
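The steps that follow pin a random label on the chosen node and then create two pods, pod4 and pod5, as in the sketch below. The ports and hostIPs come from the log; the rest of the manifest is illustrative:

# pod4 binds hostPort 54322 on hostIP 0.0.0.0; pod5 requests the same port on
# 127.0.0.1 on the same node, so the scheduler must leave pod5 Pending.
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 54322
      hostPort: 54322
      hostIP: 127.0.0.1          # overlaps pod4's 0.0.0.0 binding of the same port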
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a344360b-2933-4681-8261-6f45c192b067 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-a344360b-2933-4681-8261-6f45c192b067 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a344360b-2933-4681-8261-6f45c192b067 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:37:51.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6213" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:306.258 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":228,"skipped":3657,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:37:51.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-bacd4268-e030-42e3-9e96-8bcf751dac91 STEP: Creating a pod to test consume configMaps Mar 12 00:37:51.992: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d52b84b-1c0a-4eb3-888e-bb84831fc816" in namespace "configmap-3150" to be "success or failure" Mar 12 00:37:52.001: INFO: Pod "pod-configmaps-0d52b84b-1c0a-4eb3-888e-bb84831fc816": Phase="Pending", Reason="", readiness=false. Elapsed: 8.827465ms Mar 12 00:37:54.004: INFO: Pod "pod-configmaps-0d52b84b-1c0a-4eb3-888e-bb84831fc816": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012380543s STEP: Saw pod success Mar 12 00:37:54.005: INFO: Pod "pod-configmaps-0d52b84b-1c0a-4eb3-888e-bb84831fc816" satisfied condition "success or failure" Mar 12 00:37:54.007: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0d52b84b-1c0a-4eb3-888e-bb84831fc816 container configmap-volume-test: STEP: delete the pod Mar 12 00:37:54.071: INFO: Waiting for pod pod-configmaps-0d52b84b-1c0a-4eb3-888e-bb84831fc816 to disappear Mar 12 00:37:54.079: INFO: Pod pod-configmaps-0d52b84b-1c0a-4eb3-888e-bb84831fc816 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:37:54.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3150" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":229,"skipped":3663,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:37:54.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 12 00:37:54.134: INFO: Created pod &Pod{ObjectMeta:{dns-2088 dns-2088 /api/v1/namespaces/dns-2088/pods/dns-2088 68bcd673-7141-483f-9719-c27a91848933 947719 0 2020-03-12 00:37:54 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ztm4c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ztm4c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ztm4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:37:54.154: INFO: The status of Pod dns-2088 is Pending, waiting for it to be Running (with Ready = true) Mar 12 00:37:56.158: INFO: The status of Pod dns-2088 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 12 00:37:56.158: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2088 PodName:dns-2088 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:37:56.158: INFO: >>> kubeConfig: /root/.kube/config I0312 00:37:56.195467 7 log.go:172] (0xc001df4630) (0xc001275860) Create stream I0312 00:37:56.195508 7 log.go:172] (0xc001df4630) (0xc001275860) Stream added, broadcasting: 1 I0312 00:37:56.198211 7 log.go:172] (0xc001df4630) Reply frame received for 1 I0312 00:37:56.198254 7 log.go:172] (0xc001df4630) (0xc0017e6000) Create stream I0312 00:37:56.198274 7 log.go:172] (0xc001df4630) (0xc0017e6000) Stream added, broadcasting: 3 I0312 00:37:56.199491 7 log.go:172] (0xc001df4630) Reply frame received for 3 I0312 00:37:56.199524 7 log.go:172] (0xc001df4630) (0xc001275900) Create stream I0312 00:37:56.199537 7 log.go:172] (0xc001df4630) (0xc001275900) Stream added, broadcasting: 5 I0312 00:37:56.200707 7 log.go:172] (0xc001df4630) Reply frame received for 5 I0312 00:37:56.292548 7 log.go:172] (0xc001df4630) Data frame received for 3 I0312 00:37:56.292568 7 log.go:172] (0xc0017e6000) (3) Data frame handling I0312 00:37:56.292583 7 log.go:172] (0xc0017e6000) (3) Data frame sent I0312 00:37:56.293840 7 log.go:172] (0xc001df4630) Data frame received for 5 I0312 00:37:56.293861 7 log.go:172] (0xc001275900) (5) Data frame handling I0312 00:37:56.293892 7 log.go:172] (0xc001df4630) Data frame received for 3 I0312 00:37:56.293906 7 log.go:172] (0xc0017e6000) (3) Data frame handling I0312 00:37:56.295281 7 log.go:172] (0xc001df4630) Data frame received for 1 I0312 00:37:56.295314 7 log.go:172] (0xc001275860) (1) Data frame handling I0312 00:37:56.295326 7 log.go:172] (0xc001275860) (1) Data frame sent I0312 00:37:56.295337 7 log.go:172] (0xc001df4630) (0xc001275860) Stream removed, broadcasting: 1 I0312 00:37:56.295436 7 log.go:172] (0xc001df4630) (0xc001275860) Stream removed, broadcasting: 1 I0312 00:37:56.295448 7 log.go:172] (0xc001df4630) (0xc0017e6000) Stream removed, broadcasting: 3 I0312 00:37:56.295458 7 log.go:172] (0xc001df4630) (0xc001275900) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 12 00:37:56.295: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2088 PodName:dns-2088 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:37:56.295: INFO: >>> kubeConfig: /root/.kube/config I0312 00:37:56.295745 7 log.go:172] (0xc001df4630) Go away received I0312 00:37:56.320080 7 log.go:172] (0xc001d36420) (0xc00136c460) Create stream I0312 00:37:56.320107 7 log.go:172] (0xc001d36420) (0xc00136c460) Stream added, broadcasting: 1 I0312 00:37:56.322269 7 log.go:172] (0xc001d36420) Reply frame received for 1 I0312 00:37:56.322300 7 log.go:172] (0xc001d36420) (0xc001df60a0) Create stream I0312 00:37:56.322310 7 log.go:172] (0xc001d36420) (0xc001df60a0) Stream added, broadcasting: 3 I0312 00:37:56.323084 7 log.go:172] (0xc001d36420) Reply frame received for 3 I0312 00:37:56.323167 7 log.go:172] (0xc001d36420) (0xc001df6320) Create stream I0312 00:37:56.323191 7 log.go:172] (0xc001d36420) (0xc001df6320) Stream added, broadcasting: 5 I0312 00:37:56.324091 7 log.go:172] (0xc001d36420) Reply frame received for 5 I0312 00:37:56.399667 7 log.go:172] (0xc001d36420) Data frame received for 3 I0312 00:37:56.399687 7 log.go:172] (0xc001df60a0) (3) Data frame handling I0312 00:37:56.399704 7 log.go:172] (0xc001df60a0) (3) Data frame sent I0312 00:37:56.400057 7 log.go:172] (0xc001d36420) Data frame received for 5 I0312 00:37:56.400078 7 log.go:172] (0xc001df6320) (5) Data frame handling I0312 00:37:56.400102 7 log.go:172] (0xc001d36420) Data frame received for 3 I0312 00:37:56.400111 7 log.go:172] (0xc001df60a0) (3) Data frame handling I0312 00:37:56.401219 7 log.go:172] (0xc001d36420) Data frame received for 1 I0312 00:37:56.401242 7 log.go:172] (0xc00136c460) (1) Data frame handling I0312 00:37:56.401293 7 log.go:172] (0xc00136c460) (1) Data frame sent I0312 00:37:56.401311 7 log.go:172] (0xc001d36420) (0xc00136c460) Stream removed, broadcasting: 1 I0312 00:37:56.401327 7 log.go:172] (0xc001d36420) Go away received I0312 00:37:56.401448 7 log.go:172] (0xc001d36420) (0xc00136c460) Stream removed, broadcasting: 1 I0312 00:37:56.401468 7 log.go:172] (0xc001d36420) (0xc001df60a0) Stream removed, broadcasting: 3 I0312 00:37:56.401506 7 log.go:172] (0xc001d36420) (0xc001df6320) Stream removed, broadcasting: 5 Mar 12 00:37:56.401: INFO: Deleting pod dns-2088... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:37:56.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2088" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":230,"skipped":3684,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:37:56.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 00:37:56.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1137' Mar 12 00:37:56.574: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 00:37:56.574: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 12 00:37:56.596: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-m67kn] Mar 12 00:37:56.596: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-m67kn" in namespace "kubectl-1137" to be "running and ready" Mar 12 00:37:56.608: INFO: Pod "e2e-test-httpd-rc-m67kn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.404787ms Mar 12 00:37:58.611: INFO: Pod "e2e-test-httpd-rc-m67kn": Phase="Running", Reason="", readiness=true. Elapsed: 2.014698246s Mar 12 00:37:58.611: INFO: Pod "e2e-test-httpd-rc-m67kn" satisfied condition "running and ready" Mar 12 00:37:58.611: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-m67kn] Mar 12 00:37:58.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1137' Mar 12 00:37:58.722: INFO: stderr: "" Mar 12 00:37:58.722: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.168. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.168. 
Set the 'ServerName' directive globally to suppress this message\n[Thu Mar 12 00:37:57.783104 2020] [mpm_event:notice] [pid 1:tid 140188936936296] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Mar 12 00:37:57.783166 2020] [core:notice] [pid 1:tid 140188936936296] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639 Mar 12 00:37:58.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1137' Mar 12 00:37:58.803: INFO: stderr: "" Mar 12 00:37:58.803: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:37:58.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1137" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":280,"completed":231,"skipped":3695,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:37:58.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service endpoint-test2 in namespace services-4572 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4572 to expose endpoints map[] Mar 12 00:37:58.945: INFO: Get endpoints failed (13.543813ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 12 00:37:59.947: INFO: successfully validated that service endpoint-test2 in namespace services-4572 exposes endpoints map[] (1.015642816s elapsed) STEP: Creating pod pod1 in namespace services-4572 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4572 to expose endpoints map[pod1:[80]] Mar 12 00:38:02.008: INFO: successfully validated that service endpoint-test2 in namespace services-4572 exposes endpoints map[pod1:[80]] (2.057247627s elapsed) STEP: Creating pod pod2 in namespace services-4572 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4572 to expose endpoints map[pod1:[80] pod2:[80]] Mar 12 00:38:04.080: INFO: successfully validated that service endpoint-test2 in namespace services-4572 exposes endpoints map[pod1:[80] pod2:[80]] (2.069016488s elapsed) STEP: Deleting pod pod1 in namespace services-4572 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4572 to expose endpoints map[pod2:[80]] Mar 12 00:38:05.103: INFO: successfully 
validated that service endpoint-test2 in namespace services-4572 exposes endpoints map[pod2:[80]] (1.019949785s elapsed) STEP: Deleting pod pod2 in namespace services-4572 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4572 to expose endpoints map[] Mar 12 00:38:06.131: INFO: successfully validated that service endpoint-test2 in namespace services-4572 exposes endpoints map[] (1.02445537s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:38:06.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4572" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:7.375 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":280,"completed":232,"skipped":3711,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:38:06.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:38:22.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1992" for this suite. • [SLOW TEST:16.178 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":233,"skipped":3717,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:38:22.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-39aa174f-e895-4165-8014-65567fc93fed STEP: Creating a pod to test consume configMaps Mar 12 00:38:22.443: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a2a84f22-fe0a-417c-b5f8-a67a17467142" in namespace "projected-5514" to be "success or failure" Mar 12 00:38:22.478: INFO: Pod "pod-projected-configmaps-a2a84f22-fe0a-417c-b5f8-a67a17467142": Phase="Pending", Reason="", readiness=false. Elapsed: 35.101127ms Mar 12 00:38:24.481: INFO: Pod "pod-projected-configmaps-a2a84f22-fe0a-417c-b5f8-a67a17467142": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.038045233s STEP: Saw pod success Mar 12 00:38:24.481: INFO: Pod "pod-projected-configmaps-a2a84f22-fe0a-417c-b5f8-a67a17467142" satisfied condition "success or failure" Mar 12 00:38:24.484: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a2a84f22-fe0a-417c-b5f8-a67a17467142 container projected-configmap-volume-test: STEP: delete the pod Mar 12 00:38:24.503: INFO: Waiting for pod pod-projected-configmaps-a2a84f22-fe0a-417c-b5f8-a67a17467142 to disappear Mar 12 00:38:24.507: INFO: Pod pod-projected-configmaps-a2a84f22-fe0a-417c-b5f8-a67a17467142 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:38:24.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5514" for this suite. 
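What this test exercises is a projected volume whose configMap source gets an explicit defaultMode. A minimal sketch (the configMap name is from the log; the mode 0400, container, image, and paths are illustrative assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-defaultmode-example   # hypothetical name
  spec:
    containers:
    - name: projected-configmap-volume-test
      image: busybox                          # illustrative; the test uses its own image
      command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        defaultMode: 0400                     # illustrative mode; [LinuxOnly] because file modes are POSIX
        sources:
        - configMap:
            name: projected-configmap-test-volume-39aa174f-e895-4165-8014-65567fc93fed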
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3742,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:38:24.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 12 00:38:24.584: INFO: Waiting up to 5m0s for pod "pod-fe7f9d1c-831a-45af-9b10-79722b4c5a97" in namespace "emptydir-2392" to be "success or failure" Mar 12 00:38:24.599: INFO: Pod "pod-fe7f9d1c-831a-45af-9b10-79722b4c5a97": Phase="Pending", Reason="", readiness=false. Elapsed: 14.555973ms Mar 12 00:38:26.602: INFO: Pod "pod-fe7f9d1c-831a-45af-9b10-79722b4c5a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017764229s STEP: Saw pod success Mar 12 00:38:26.602: INFO: Pod "pod-fe7f9d1c-831a-45af-9b10-79722b4c5a97" satisfied condition "success or failure" Mar 12 00:38:26.605: INFO: Trying to get logs from node latest-worker pod pod-fe7f9d1c-831a-45af-9b10-79722b4c5a97 container test-container: STEP: delete the pod Mar 12 00:38:26.620: INFO: Waiting for pod pod-fe7f9d1c-831a-45af-9b10-79722b4c5a97 to disappear Mar 12 00:38:26.624: INFO: Pod pod-fe7f9d1c-831a-45af-9b10-79722b4c5a97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:38:26.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2392" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":235,"skipped":3744,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:38:26.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-de186936-0fe3-4e3c-ac30-b5940048fda1 STEP: Creating configMap with name cm-test-opt-upd-e5248d06-00d5-495c-a52d-a0a67144b5b0 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-de186936-0fe3-4e3c-ac30-b5940048fda1 STEP: Updating configmap cm-test-opt-upd-e5248d06-00d5-495c-a52d-a0a67144b5b0 STEP: Creating configMap with name cm-test-opt-create-9c732c16-f397-4e18-a9c5-04a1df1d8b0e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:39:57.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8256" for this suite. 
• [SLOW TEST:90.577 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3780,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:39:57.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components Mar 12 00:39:57.264: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 12 00:39:57.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4177' Mar 12 00:39:59.487: INFO: stderr: "" Mar 12 00:39:59.487: INFO: stdout: "service/agnhost-slave created\n" Mar 12 00:39:59.487: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 12 00:39:59.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4177' Mar 12 00:39:59.791: INFO: stderr: "" Mar 12 00:39:59.791: INFO: stdout: "service/agnhost-master created\n" Mar 12 00:39:59.791: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 12 00:39:59.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4177' Mar 12 00:40:00.076: INFO: stderr: "" Mar 12 00:40:00.076: INFO: stdout: "service/frontend created\n" Mar 12 00:40:00.077: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 12 00:40:00.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4177' Mar 12 00:40:00.313: INFO: stderr: "" Mar 12 00:40:00.313: INFO: stdout: "deployment.apps/frontend created\n" Mar 12 00:40:00.313: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 00:40:00.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4177' Mar 12 00:40:00.572: INFO: stderr: "" Mar 12 00:40:00.572: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 12 00:40:00.572: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 00:40:00.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4177' Mar 12 00:40:00.802: INFO: stderr: "" Mar 12 00:40:00.802: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 12 00:40:00.802: INFO: Waiting for all frontend pods to be Running. Mar 12 00:40:05.853: INFO: Waiting for frontend to serve content. Mar 12 00:40:05.863: INFO: Trying to add a new entry to the guestbook. Mar 12 00:40:05.873: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 12 00:40:05.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4177' Mar 12 00:40:06.015: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:40:06.015: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 12 00:40:06.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4177' Mar 12 00:40:06.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:40:06.157: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 00:40:06.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4177' Mar 12 00:40:06.251: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:40:06.251: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 00:40:06.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4177' Mar 12 00:40:06.323: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:40:06.323: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 00:40:06.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4177' Mar 12 00:40:06.401: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:40:06.401: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 00:40:06.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4177' Mar 12 00:40:06.467: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:40:06.467: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:40:06.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4177" for this suite. 
• [SLOW TEST:9.264 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":237,"skipped":3785,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:40:06.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-b9a72fe1-4e11-4657-b05e-d7f7f29df31a STEP: Creating a pod to test consume configMaps Mar 12 00:40:06.549: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa" in namespace "projected-5989" to be "success or failure" Mar 12 00:40:06.555: INFO: Pod "pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.996563ms Mar 12 00:40:08.558: INFO: Pod "pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009507283s Mar 12 00:40:10.562: INFO: Pod "pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012921797s STEP: Saw pod success Mar 12 00:40:10.562: INFO: Pod "pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa" satisfied condition "success or failure" Mar 12 00:40:10.564: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa container projected-configmap-volume-test: STEP: delete the pod Mar 12 00:40:10.600: INFO: Waiting for pod pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa to disappear Mar 12 00:40:10.603: INFO: Pod pod-projected-configmaps-eb29d448-a50c-44e3-839d-c391a22311aa no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:40:10.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5989" for this suite. 
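This is the plain variant of the projected configMap consumption sketched after the defaultMode test above, with no mode override. For completeness (the configMap name is from the log; pod name, container, image, and the key/path are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmap-example   # hypothetical name
  spec:
    containers:
    - name: projected-configmap-volume-test
      image: busybox                        # illustrative; the test uses its own image
      command: ["cat", "/etc/projected-configmap-volume/data-1"]   # hypothetical key/path
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-b9a72fe1-4e11-4657-b05e-d7f7f29df31a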
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":238,"skipped":3811,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:40:10.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Mar 12 00:40:10.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08d209ff-98b7-4173-a060-3564574aeeef" in namespace "downward-api-2035" to be "success or failure" Mar 12 00:40:10.689: INFO: Pod "downwardapi-volume-08d209ff-98b7-4173-a060-3564574aeeef": Phase="Pending", Reason="", readiness=false. Elapsed: 28.551527ms Mar 12 00:40:12.692: INFO: Pod "downwardapi-volume-08d209ff-98b7-4173-a060-3564574aeeef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032076321s STEP: Saw pod success Mar 12 00:40:12.692: INFO: Pod "downwardapi-volume-08d209ff-98b7-4173-a060-3564574aeeef" satisfied condition "success or failure" Mar 12 00:40:12.694: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-08d209ff-98b7-4173-a060-3564574aeeef container client-container: STEP: delete the pod Mar 12 00:40:12.738: INFO: Waiting for pod downwardapi-volume-08d209ff-98b7-4173-a060-3564574aeeef to disappear Mar 12 00:40:12.761: INFO: Pod downwardapi-volume-08d209ff-98b7-4173-a060-3564574aeeef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:40:12.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2035" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":239,"skipped":3842,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:40:12.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4258 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating statefulset ss in namespace statefulset-4258 Mar 12 00:40:12.853: INFO: Found 0 stateful pods, waiting for 1 Mar 12 00:40:22.857: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 00:40:22.885: INFO: Deleting all statefulset in ns statefulset-4258 Mar 12 00:40:22.919: INFO: Scaling statefulset ss to 0 Mar 12 00:40:42.972: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 00:40:42.974: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:40:42.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4258" for this suite. 
• [SLOW TEST:30.226 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":240,"skipped":3844,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:40:42.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:40:47.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-776" for this suite. 
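The pod under test runs a busybox command that exits non-zero every time; the assertion is that the container ends up in a terminated state carrying a reason. A sketch of such a pod (the command and restartPolicy are assumptions; the log does not print the spec):

  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-pod          # hypothetical name
  spec:
    restartPolicy: Never         # assumption: keeps the container in its final terminated state
    containers:
    - name: bin-false
      image: busybox             # illustrative
      command: ["/bin/false"]    # always exits 1

status.containerStatuses[0].state.terminated is then expected to carry a non-empty reason (typically Error).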
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":241,"skipped":3860,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:40:47.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75 Mar 12 00:40:47.138: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the sample API server. Mar 12 00:40:47.893: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 12 00:40:52.814: INFO: Waited 2.775068006s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:40:53.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4236" for this suite. 
• [SLOW TEST:6.271 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":242,"skipped":3890,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:40:53.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-48aa148d-18e1-4781-b69b-020c31b2535d STEP: Creating secret with name s-test-opt-upd-204f1156-8892-401f-a86d-74322f3a9a00 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-48aa148d-18e1-4781-b69b-020c31b2535d STEP: Updating secret s-test-opt-upd-204f1156-8892-401f-a86d-74322f3a9a00 STEP: Creating secret with name s-test-opt-create-b2c5e2d9-c740-4a05-acd3-10d75b006724 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:01.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7511" for this suite. 
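Same optional-source pattern as the ConfigMap variant earlier, using secret volumes; a compact sketch (the secret names are from the log; pod name, container, image, and mount paths are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-optional-example   # hypothetical name
  spec:
    containers:
    - name: secret-volume-test
      image: busybox                     # illustrative
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: s-del
        mountPath: /etc/secret-volume-del
      - name: s-upd
        mountPath: /etc/secret-volume-upd
      - name: s-create
        mountPath: /etc/secret-volume-create
    volumes:
    - name: s-del
      secret:
        secretName: s-test-opt-del-48aa148d-18e1-4781-b69b-020c31b2535d
        optional: true
    - name: s-upd
      secret:
        secretName: s-test-opt-upd-204f1156-8892-401f-a86d-74322f3a9a00
        optional: true
    - name: s-create
      secret:
        secretName: s-test-opt-create-b2c5e2d9-c740-4a05-acd3-10d75b006724
        optional: true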
• [SLOW TEST:8.252 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":243,"skipped":3895,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:01.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Mar 12 00:41:01.684: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:01.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7057" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":280,"completed":244,"skipped":3919,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:01.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Mar 12 00:41:01.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7331' Mar 12 00:41:02.262: INFO: stderr: "" Mar 12 00:41:02.262: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 00:41:02.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7331' Mar 12 00:41:02.371: INFO: stderr: "" Mar 12 00:41:02.371: INFO: stdout: "update-demo-nautilus-mwtrx update-demo-nautilus-x9mzv " Mar 12 00:41:02.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwtrx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:02.467: INFO: stderr: "" Mar 12 00:41:02.467: INFO: stdout: "" Mar 12 00:41:02.467: INFO: update-demo-nautilus-mwtrx is created but not running Mar 12 00:41:07.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7331' Mar 12 00:41:07.538: INFO: stderr: "" Mar 12 00:41:07.538: INFO: stdout: "update-demo-nautilus-mwtrx update-demo-nautilus-x9mzv " Mar 12 00:41:07.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwtrx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:07.605: INFO: stderr: "" Mar 12 00:41:07.605: INFO: stdout: "true" Mar 12 00:41:07.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwtrx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:07.668: INFO: stderr: "" Mar 12 00:41:07.668: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:41:07.668: INFO: validating pod update-demo-nautilus-mwtrx Mar 12 00:41:07.671: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:41:07.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:41:07.671: INFO: update-demo-nautilus-mwtrx is verified up and running Mar 12 00:41:07.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9mzv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:07.738: INFO: stderr: "" Mar 12 00:41:07.738: INFO: stdout: "true" Mar 12 00:41:07.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9mzv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:07.800: INFO: stderr: "" Mar 12 00:41:07.800: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:41:07.800: INFO: validating pod update-demo-nautilus-x9mzv Mar 12 00:41:07.802: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:41:07.802: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:41:07.802: INFO: update-demo-nautilus-x9mzv is verified up and running STEP: scaling down the replication controller Mar 12 00:41:07.804: INFO: scanned /root for discovery docs: Mar 12 00:41:07.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7331' Mar 12 00:41:08.912: INFO: stderr: "" Mar 12 00:41:08.912: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 12 00:41:08.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7331' Mar 12 00:41:09.005: INFO: stderr: "" Mar 12 00:41:09.005: INFO: stdout: "update-demo-nautilus-mwtrx update-demo-nautilus-x9mzv " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 12 00:41:14.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7331' Mar 12 00:41:14.112: INFO: stderr: "" Mar 12 00:41:14.112: INFO: stdout: "update-demo-nautilus-x9mzv " Mar 12 00:41:14.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9mzv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:14.192: INFO: stderr: "" Mar 12 00:41:14.192: INFO: stdout: "true" Mar 12 00:41:14.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9mzv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:14.271: INFO: stderr: "" Mar 12 00:41:14.271: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:41:14.271: INFO: validating pod update-demo-nautilus-x9mzv Mar 12 00:41:14.273: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:41:14.273: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:41:14.273: INFO: update-demo-nautilus-x9mzv is verified up and running STEP: scaling up the replication controller Mar 12 00:41:14.275: INFO: scanned /root for discovery docs: Mar 12 00:41:14.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7331' Mar 12 00:41:15.363: INFO: stderr: "" Mar 12 00:41:15.363: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 00:41:15.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7331' Mar 12 00:41:15.433: INFO: stderr: "" Mar 12 00:41:15.433: INFO: stdout: "update-demo-nautilus-78lcx update-demo-nautilus-x9mzv " Mar 12 00:41:15.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-78lcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:15.512: INFO: stderr: "" Mar 12 00:41:15.512: INFO: stdout: "" Mar 12 00:41:15.512: INFO: update-demo-nautilus-78lcx is created but not running Mar 12 00:41:20.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7331' Mar 12 00:41:20.606: INFO: stderr: "" Mar 12 00:41:20.606: INFO: stdout: "update-demo-nautilus-78lcx update-demo-nautilus-x9mzv " Mar 12 00:41:20.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-78lcx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:20.689: INFO: stderr: "" Mar 12 00:41:20.689: INFO: stdout: "true" Mar 12 00:41:20.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-78lcx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:20.770: INFO: stderr: "" Mar 12 00:41:20.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:41:20.770: INFO: validating pod update-demo-nautilus-78lcx Mar 12 00:41:20.774: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:41:20.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:41:20.774: INFO: update-demo-nautilus-78lcx is verified up and running Mar 12 00:41:20.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9mzv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:20.862: INFO: stderr: "" Mar 12 00:41:20.862: INFO: stdout: "true" Mar 12 00:41:20.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9mzv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7331' Mar 12 00:41:20.927: INFO: stderr: "" Mar 12 00:41:20.927: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 00:41:20.927: INFO: validating pod update-demo-nautilus-x9mzv Mar 12 00:41:20.929: INFO: got data: { "image": "nautilus.jpg" } Mar 12 00:41:20.929: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 00:41:20.929: INFO: update-demo-nautilus-x9mzv is verified up and running STEP: using delete to clean up resources Mar 12 00:41:20.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7331' Mar 12 00:41:21.009: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 12 00:41:21.009: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 12 00:41:21.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7331' Mar 12 00:41:21.076: INFO: stderr: "No resources found in kubectl-7331 namespace.\n" Mar 12 00:41:21.076: INFO: stdout: "" Mar 12 00:41:21.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7331 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 00:41:21.139: INFO: stderr: "" Mar 12 00:41:21.139: INFO: stdout: "update-demo-nautilus-78lcx\nupdate-demo-nautilus-x9mzv\n" Mar 12 00:41:21.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7331' Mar 12 00:41:21.762: INFO: stderr: "No resources found in kubectl-7331 namespace.\n" Mar 12 00:41:21.762: INFO: stdout: "" Mar 12 00:41:21.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7331 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 00:41:21.861: INFO: stderr: "" Mar 12 00:41:21.861: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:21.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7331" for this suite. 
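The scaling exercised above reduces to two kubectl invocations plus a poll. A minimal reproduction, reusing the RC name and namespace recorded in this run (they only resolve inside an equivalent cluster), would be:

    kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7331
    kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7331
    # poll pod names the way the test does, via a go-template over the pod list:
    kubectl get pods -l name=update-demo --namespace=kubectl-7331 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'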
• [SLOW TEST:20.051 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":280,"completed":245,"skipped":3952,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:21.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:21.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7207" for this suite. 
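The discovery-document walk above (fetch /apis, locate the group, then the group/version, then the resource list) can be replayed by hand with kubectl's raw API passthrough; these paths mirror the STEP lines and should resolve on any cluster serving apiextensions.k8s.io/v1:

    kubectl get --raw /apis
    kubectl get --raw /apis/apiextensions.k8s.io
    kubectl get --raw /apis/apiextensions.k8s.io/v1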
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":246,"skipped":4004,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:21.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 00:41:22.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5959' Mar 12 00:41:22.103: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 00:41:22.103: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740 Mar 12 00:41:26.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5959' Mar 12 00:41:26.244: INFO: stderr: "" Mar 12 00:41:26.244: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:26.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5959" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":280,"completed":247,"skipped":4016,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:26.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service nodeport-test with type=NodePort in namespace services-6838 STEP: creating replication controller nodeport-test in namespace services-6838 I0312 00:41:26.427361 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6838, replica count: 2 I0312 00:41:29.477858 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 00:41:29.477: INFO: Creating new exec pod Mar 12 00:41:32.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6838 execpod7r7wl -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 12 00:41:32.722: INFO: stderr: "I0312 00:41:32.665354 4131 log.go:172] (0xc000bd09a0) (0xc000ae6140) Create stream\nI0312 00:41:32.665412 4131 log.go:172] (0xc000bd09a0) (0xc000ae6140) Stream added, broadcasting: 1\nI0312 00:41:32.667109 4131 log.go:172] (0xc000bd09a0) Reply frame received for 1\nI0312 00:41:32.667149 4131 log.go:172] (0xc000bd09a0) (0xc000aca0a0) Create stream\nI0312 00:41:32.667160 4131 log.go:172] (0xc000bd09a0) (0xc000aca0a0) Stream added, broadcasting: 3\nI0312 00:41:32.667806 4131 log.go:172] (0xc000bd09a0) Reply frame received for 3\nI0312 00:41:32.667826 4131 log.go:172] (0xc000bd09a0) (0xc000ae61e0) Create stream\nI0312 00:41:32.667832 4131 log.go:172] (0xc000bd09a0) (0xc000ae61e0) Stream added, broadcasting: 5\nI0312 00:41:32.668478 4131 log.go:172] (0xc000bd09a0) Reply frame received for 5\nI0312 00:41:32.716655 4131 log.go:172] (0xc000bd09a0) Data frame received for 5\nI0312 00:41:32.716675 4131 log.go:172] (0xc000ae61e0) (5) Data frame handling\nI0312 00:41:32.716682 4131 log.go:172] (0xc000ae61e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0312 00:41:32.717395 4131 log.go:172] (0xc000bd09a0) Data frame received for 5\nI0312 00:41:32.717423 4131 log.go:172] (0xc000ae61e0) (5) Data frame handling\nI0312 00:41:32.717434 4131 log.go:172] (0xc000ae61e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0312 00:41:32.717888 4131 log.go:172] (0xc000bd09a0) Data frame received for 5\nI0312 00:41:32.717907 4131 log.go:172] (0xc000ae61e0) (5) Data frame handling\nI0312 00:41:32.718041 4131 
log.go:172] (0xc000bd09a0) Data frame received for 3\nI0312 00:41:32.718068 4131 log.go:172] (0xc000aca0a0) (3) Data frame handling\nI0312 00:41:32.719457 4131 log.go:172] (0xc000bd09a0) Data frame received for 1\nI0312 00:41:32.719470 4131 log.go:172] (0xc000ae6140) (1) Data frame handling\nI0312 00:41:32.719476 4131 log.go:172] (0xc000ae6140) (1) Data frame sent\nI0312 00:41:32.719484 4131 log.go:172] (0xc000bd09a0) (0xc000ae6140) Stream removed, broadcasting: 1\nI0312 00:41:32.719491 4131 log.go:172] (0xc000bd09a0) Go away received\nI0312 00:41:32.719904 4131 log.go:172] (0xc000bd09a0) (0xc000ae6140) Stream removed, broadcasting: 1\nI0312 00:41:32.719921 4131 log.go:172] (0xc000bd09a0) (0xc000aca0a0) Stream removed, broadcasting: 3\nI0312 00:41:32.719929 4131 log.go:172] (0xc000bd09a0) (0xc000ae61e0) Stream removed, broadcasting: 5\n" Mar 12 00:41:32.722: INFO: stdout: "" Mar 12 00:41:32.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6838 execpod7r7wl -- /bin/sh -x -c nc -zv -t -w 2 10.96.170.85 80' Mar 12 00:41:32.906: INFO: stderr: "I0312 00:41:32.837012 4151 log.go:172] (0xc000b313f0) (0xc0009ec6e0) Create stream\nI0312 00:41:32.837057 4151 log.go:172] (0xc000b313f0) (0xc0009ec6e0) Stream added, broadcasting: 1\nI0312 00:41:32.840747 4151 log.go:172] (0xc000b313f0) Reply frame received for 1\nI0312 00:41:32.840778 4151 log.go:172] (0xc000b313f0) (0xc0006066e0) Create stream\nI0312 00:41:32.840787 4151 log.go:172] (0xc000b313f0) (0xc0006066e0) Stream added, broadcasting: 3\nI0312 00:41:32.841504 4151 log.go:172] (0xc000b313f0) Reply frame received for 3\nI0312 00:41:32.841531 4151 log.go:172] (0xc000b313f0) (0xc000423360) Create stream\nI0312 00:41:32.841541 4151 log.go:172] (0xc000b313f0) (0xc000423360) Stream added, broadcasting: 5\nI0312 00:41:32.842166 4151 log.go:172] (0xc000b313f0) Reply frame received for 5\nI0312 00:41:32.901819 4151 log.go:172] (0xc000b313f0) Data frame received for 5\nI0312 00:41:32.901843 4151 log.go:172] (0xc000423360) (5) Data frame handling\nI0312 00:41:32.901856 4151 log.go:172] (0xc000423360) (5) Data frame sent\nI0312 00:41:32.901864 4151 log.go:172] (0xc000b313f0) Data frame received for 5\nI0312 00:41:32.901870 4151 log.go:172] (0xc000423360) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.170.85 80\nConnection to 10.96.170.85 80 port [tcp/http] succeeded!\nI0312 00:41:32.901924 4151 log.go:172] (0xc000b313f0) Data frame received for 3\nI0312 00:41:32.901937 4151 log.go:172] (0xc0006066e0) (3) Data frame handling\nI0312 00:41:32.903153 4151 log.go:172] (0xc000b313f0) Data frame received for 1\nI0312 00:41:32.903215 4151 log.go:172] (0xc0009ec6e0) (1) Data frame handling\nI0312 00:41:32.903262 4151 log.go:172] (0xc0009ec6e0) (1) Data frame sent\nI0312 00:41:32.903283 4151 log.go:172] (0xc000b313f0) (0xc0009ec6e0) Stream removed, broadcasting: 1\nI0312 00:41:32.903298 4151 log.go:172] (0xc000b313f0) Go away received\nI0312 00:41:32.903577 4151 log.go:172] (0xc000b313f0) (0xc0009ec6e0) Stream removed, broadcasting: 1\nI0312 00:41:32.903592 4151 log.go:172] (0xc000b313f0) (0xc0006066e0) Stream removed, broadcasting: 3\nI0312 00:41:32.903598 4151 log.go:172] (0xc000b313f0) (0xc000423360) Stream removed, broadcasting: 5\n" Mar 12 00:41:32.906: INFO: stdout: "" Mar 12 00:41:32.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6838 execpod7r7wl -- /bin/sh -x -c nc -zv -t -w 2 
172.17.0.16 31482' Mar 12 00:41:33.081: INFO: stderr: "I0312 00:41:33.015749 4171 log.go:172] (0xc000a080b0) (0xc000709ea0) Create stream\nI0312 00:41:33.015790 4171 log.go:172] (0xc000a080b0) (0xc000709ea0) Stream added, broadcasting: 1\nI0312 00:41:33.017785 4171 log.go:172] (0xc000a080b0) Reply frame received for 1\nI0312 00:41:33.017814 4171 log.go:172] (0xc000a080b0) (0xc0006aa820) Create stream\nI0312 00:41:33.017825 4171 log.go:172] (0xc000a080b0) (0xc0006aa820) Stream added, broadcasting: 3\nI0312 00:41:33.018532 4171 log.go:172] (0xc000a080b0) Reply frame received for 3\nI0312 00:41:33.018553 4171 log.go:172] (0xc000a080b0) (0xc000709f40) Create stream\nI0312 00:41:33.018558 4171 log.go:172] (0xc000a080b0) (0xc000709f40) Stream added, broadcasting: 5\nI0312 00:41:33.019509 4171 log.go:172] (0xc000a080b0) Reply frame received for 5\nI0312 00:41:33.077105 4171 log.go:172] (0xc000a080b0) Data frame received for 3\nI0312 00:41:33.077137 4171 log.go:172] (0xc0006aa820) (3) Data frame handling\nI0312 00:41:33.077155 4171 log.go:172] (0xc000a080b0) Data frame received for 5\nI0312 00:41:33.077160 4171 log.go:172] (0xc000709f40) (5) Data frame handling\nI0312 00:41:33.077177 4171 log.go:172] (0xc000709f40) (5) Data frame sent\nI0312 00:41:33.077184 4171 log.go:172] (0xc000a080b0) Data frame received for 5\nI0312 00:41:33.077189 4171 log.go:172] (0xc000709f40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.16 31482\nConnection to 172.17.0.16 31482 port [tcp/31482] succeeded!\nI0312 00:41:33.078220 4171 log.go:172] (0xc000a080b0) Data frame received for 1\nI0312 00:41:33.078238 4171 log.go:172] (0xc000709ea0) (1) Data frame handling\nI0312 00:41:33.078247 4171 log.go:172] (0xc000709ea0) (1) Data frame sent\nI0312 00:41:33.078260 4171 log.go:172] (0xc000a080b0) (0xc000709ea0) Stream removed, broadcasting: 1\nI0312 00:41:33.078271 4171 log.go:172] (0xc000a080b0) Go away received\nI0312 00:41:33.078675 4171 log.go:172] (0xc000a080b0) (0xc000709ea0) Stream removed, broadcasting: 1\nI0312 00:41:33.078693 4171 log.go:172] (0xc000a080b0) (0xc0006aa820) Stream removed, broadcasting: 3\nI0312 00:41:33.078702 4171 log.go:172] (0xc000a080b0) (0xc000709f40) Stream removed, broadcasting: 5\n" Mar 12 00:41:33.082: INFO: stdout: "" Mar 12 00:41:33.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6838 execpod7r7wl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31482' Mar 12 00:41:33.275: INFO: stderr: "I0312 00:41:33.193021 4193 log.go:172] (0xc00003ab00) (0xc0008d4000) Create stream\nI0312 00:41:33.193062 4193 log.go:172] (0xc00003ab00) (0xc0008d4000) Stream added, broadcasting: 1\nI0312 00:41:33.195136 4193 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0312 00:41:33.195167 4193 log.go:172] (0xc00003ab00) (0xc000633b80) Create stream\nI0312 00:41:33.195178 4193 log.go:172] (0xc00003ab00) (0xc000633b80) Stream added, broadcasting: 3\nI0312 00:41:33.195833 4193 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0312 00:41:33.195866 4193 log.go:172] (0xc00003ab00) (0xc000140000) Create stream\nI0312 00:41:33.195878 4193 log.go:172] (0xc00003ab00) (0xc000140000) Stream added, broadcasting: 5\nI0312 00:41:33.196763 4193 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0312 00:41:33.268360 4193 log.go:172] (0xc00003ab00) Data frame received for 3\nI0312 00:41:33.268387 4193 log.go:172] (0xc000633b80) (3) Data frame handling\nI0312 00:41:33.268404 4193 log.go:172] (0xc00003ab00) Data frame received for 
5\nI0312 00:41:33.268412 4193 log.go:172] (0xc000140000) (5) Data frame handling\nI0312 00:41:33.268421 4193 log.go:172] (0xc000140000) (5) Data frame sent\nI0312 00:41:33.268428 4193 log.go:172] (0xc00003ab00) Data frame received for 5\nI0312 00:41:33.268433 4193 log.go:172] (0xc000140000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31482\nConnection to 172.17.0.18 31482 port [tcp/31482] succeeded!\nI0312 00:41:33.270450 4193 log.go:172] (0xc00003ab00) Data frame received for 1\nI0312 00:41:33.270469 4193 log.go:172] (0xc0008d4000) (1) Data frame handling\nI0312 00:41:33.270481 4193 log.go:172] (0xc0008d4000) (1) Data frame sent\nI0312 00:41:33.271118 4193 log.go:172] (0xc00003ab00) (0xc0008d4000) Stream removed, broadcasting: 1\nI0312 00:41:33.271137 4193 log.go:172] (0xc00003ab00) Go away received\nI0312 00:41:33.271397 4193 log.go:172] (0xc00003ab00) (0xc0008d4000) Stream removed, broadcasting: 1\nI0312 00:41:33.271411 4193 log.go:172] (0xc00003ab00) (0xc000633b80) Stream removed, broadcasting: 3\nI0312 00:41:33.271418 4193 log.go:172] (0xc00003ab00) (0xc000140000) Stream removed, broadcasting: 5\n" Mar 12 00:41:33.275: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:33.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6838" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:6.998 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":248,"skipped":4019,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:33.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:44.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2139" for this suite. • [SLOW TEST:11.117 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":249,"skipped":4036,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:44.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:46.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5772" for this suite. 
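The containers test above exercises the default command path: a container that sets neither command nor args runs the image's own ENTRYPOINT and CMD. A sketch of a pod of that shape (name and image here are illustrative, not taken from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: image-defaults-demo
    spec:
      containers:
      - name: main
        # no command/args: the image's ENTRYPOINT and CMD apply
        image: docker.io/library/httpd:2.4.38-alpine
    EOF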
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":250,"skipped":4037,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:46.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:41:48.583: INFO: Waiting up to 5m0s for pod "client-envvars-aface512-f8d8-4e52-91ee-5c9fc0526ecd" in namespace "pods-5252" to be "success or failure" Mar 12 00:41:48.589: INFO: Pod "client-envvars-aface512-f8d8-4e52-91ee-5c9fc0526ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.994076ms Mar 12 00:41:50.591: INFO: Pod "client-envvars-aface512-f8d8-4e52-91ee-5c9fc0526ecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008206396s STEP: Saw pod success Mar 12 00:41:50.592: INFO: Pod "client-envvars-aface512-f8d8-4e52-91ee-5c9fc0526ecd" satisfied condition "success or failure" Mar 12 00:41:50.593: INFO: Trying to get logs from node latest-worker2 pod client-envvars-aface512-f8d8-4e52-91ee-5c9fc0526ecd container env3cont: STEP: delete the pod Mar 12 00:41:50.613: INFO: Waiting for pod client-envvars-aface512-f8d8-4e52-91ee-5c9fc0526ecd to disappear Mar 12 00:41:50.619: INFO: Pod client-envvars-aface512-f8d8-4e52-91ee-5c9fc0526ecd no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:41:50.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5252" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":251,"skipped":4050,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:41:50.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 12 00:41:56.739: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 00:41:56.758: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 00:41:58.759: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 00:41:58.761: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 00:42:00.759: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 00:42:00.762: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 00:42:02.759: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 00:42:02.763: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:42:02.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6943" for this suite. 
• [SLOW TEST:12.160 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":252,"skipped":4100,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:42:02.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:42:02.916: INFO: Create a RollingUpdate DaemonSet Mar 12 00:42:02.919: INFO: Check that daemon pods launch on every node of the cluster Mar 12 00:42:02.947: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:42:02.950: INFO: Number of nodes with available pods: 0 Mar 12 00:42:02.950: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:42:03.953: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:42:03.956: INFO: Number of nodes with available pods: 0 Mar 12 00:42:03.956: INFO: Node latest-worker is running more than one daemon pod Mar 12 00:42:04.953: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:42:04.955: INFO: Number of nodes with available pods: 1 Mar 12 00:42:04.955: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 00:42:05.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:42:05.957: INFO: Number of nodes with available pods: 2 Mar 12 00:42:05.957: INFO: Number of running nodes: 2, number of available pods: 2 Mar 12 00:42:05.957: INFO: Update the DaemonSet to trigger a rollout Mar 12 00:42:05.962: INFO: Updating DaemonSet daemon-set Mar 12 00:42:12.976: INFO: Roll back the DaemonSet before rollout is complete Mar 12 00:42:12.981: INFO: Updating DaemonSet daemon-set Mar 12 
00:42:12.981: INFO: Make sure DaemonSet rollback is complete Mar 12 00:42:12.991: INFO: Wrong image for pod: daemon-set-5dshx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 12 00:42:12.991: INFO: Pod daemon-set-5dshx is not available Mar 12 00:42:13.013: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:42:14.017: INFO: Wrong image for pod: daemon-set-5dshx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 12 00:42:14.017: INFO: Pod daemon-set-5dshx is not available Mar 12 00:42:14.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 00:42:15.017: INFO: Pod daemon-set-g5qbk is not available Mar 12 00:42:15.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-730, will wait for the garbage collector to delete the pods Mar 12 00:42:15.089: INFO: Deleting DaemonSet.extensions daemon-set took: 12.193696ms Mar 12 00:42:15.389: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.202546ms Mar 12 00:42:22.192: INFO: Number of nodes with available pods: 0 Mar 12 00:42:22.192: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 00:42:22.195: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-730/daemonsets","resourceVersion":"949687"},"items":null} Mar 12 00:42:22.197: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-730/pods","resourceVersion":"949687"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:42:22.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-730" for this suite. 
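The rollout-and-rollback above (image briefly set to foo:non-existent, then restored to docker.io/library/httpd:2.4.38-alpine) maps onto two standard commands; the container name "app" below is hypothetical, since the log does not print it:

    kubectl -n daemonsets-730 set image daemonset/daemon-set app=foo:non-existent
    kubectl -n daemonsets-730 rollout undo daemonset/daemon-set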
• [SLOW TEST:19.448 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":253,"skipped":4103,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:42:22.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 12 00:42:22.276: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949693 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:42:22.277: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949693 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 12 00:42:32.281: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949751 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:42:32.281: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949751 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 12 00:42:42.291: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 
/api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949781 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:42:42.291: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949781 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 12 00:42:52.296: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949809 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:42:52.296: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-a 9dd00464-7758-4fdd-aad0-1186e015964d 949809 0 2020-03-12 00:42:22 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 12 00:43:02.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-b 9e1d9036-2e14-4b23-8784-3d763a6910e6 949839 0 2020-03-12 00:43:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:43:02.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-b 9e1d9036-2e14-4b23-8784-3d763a6910e6 949839 0 2020-03-12 00:43:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 12 00:43:12.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-b 9e1d9036-2e14-4b23-8784-3d763a6910e6 949869 0 2020-03-12 00:43:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:43:12.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3794 /api/v1/namespaces/watch-3794/configmaps/e2e-watch-test-configmap-b 9e1d9036-2e14-4b23-8784-3d763a6910e6 949869 0 2020-03-12 00:43:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:43:22.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3794" for this suite. 
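The watchers above are label-selected; the same ADDED/MODIFIED/DELETED stream can be observed from the CLI with a watch on the label the test uses:

    kubectl -n watch-3794 get configmaps -l watch-this-configmap=multiple-watchers-A -w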
• [SLOW TEST:60.085 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":254,"skipped":4127,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:43:22.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-60c8a9cf-a1d9-449a-8ddf-e4d83de59b0d in namespace container-probe-5886 Mar 12 00:43:24.441: INFO: Started pod busybox-60c8a9cf-a1d9-449a-8ddf-e4d83de59b0d in namespace container-probe-5886 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 00:43:24.444: INFO: Initial restart count of pod busybox-60c8a9cf-a1d9-449a-8ddf-e4d83de59b0d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:47:25.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5886" for this suite. 
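The probe test above expects restartCount to stay at 0 for the whole observation window because the probed file keeps existing. A sketch of a pod with the same probe shape (everything beyond the probe itself is an assumption):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: busybox
        image: busybox
        # keep /tmp/health present so the probe never fails
        args: ["sh", "-c", "touch /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF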
• [SLOW TEST:243.174 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4127,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:47:25.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-projected-all-test-volume-631f55d2-67cc-4d17-a638-24889f2af1b1 STEP: Creating secret with name secret-projected-all-test-volume-952a7e3e-f218-430d-bc86-028f5b2b1183 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 12 00:47:25.565: INFO: Waiting up to 5m0s for pod "projected-volume-6549a895-1b96-4999-a479-e935017a6b94" in namespace "projected-2170" to be "success or failure" Mar 12 00:47:25.570: INFO: Pod "projected-volume-6549a895-1b96-4999-a479-e935017a6b94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362947ms Mar 12 00:47:27.573: INFO: Pod "projected-volume-6549a895-1b96-4999-a479-e935017a6b94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007862429s STEP: Saw pod success Mar 12 00:47:27.573: INFO: Pod "projected-volume-6549a895-1b96-4999-a479-e935017a6b94" satisfied condition "success or failure" Mar 12 00:47:27.576: INFO: Trying to get logs from node latest-worker pod projected-volume-6549a895-1b96-4999-a479-e935017a6b94 container projected-all-volume-test: STEP: delete the pod Mar 12 00:47:27.613: INFO: Waiting for pod projected-volume-6549a895-1b96-4999-a479-e935017a6b94 to disappear Mar 12 00:47:27.618: INFO: Pod projected-volume-6549a895-1b96-4999-a479-e935017a6b94 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:47:27.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2170" for this suite. 
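The projected-volume test above surfaces a configMap, a secret, and downwardAPI data through one volume. A sketch of that shape, keeping the container name from the log; the source object names are hypothetical (the run generated random ones):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-all-demo
    spec:
      containers:
      - name: projected-all-volume-test
        image: busybox
        args: ["sh", "-c", "ls /all"]
        volumeMounts:
        - name: all-in-one
          mountPath: /all
      volumes:
      - name: all-in-one
        projected:
          sources:
          - configMap:
              name: my-config    # hypothetical name
          - secret:
              name: my-secret    # hypothetical name
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
    EOF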
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":256,"skipped":4171,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:47:27.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test hostPath mode Mar 12 00:47:27.696: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2457" to be "success or failure" Mar 12 00:47:27.702: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.5761ms Mar 12 00:47:29.705: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009346234s STEP: Saw pod success Mar 12 00:47:29.705: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 12 00:47:29.708: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 12 00:47:29.758: INFO: Waiting for pod pod-host-path-test to disappear Mar 12 00:47:29.763: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:47:29.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2457" for this suite. 
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":257,"skipped":4171,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:47:29.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token Mar 12 00:47:30.339: INFO: created pod pod-service-account-defaultsa Mar 12 00:47:30.339: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 12 00:47:30.344: INFO: created pod pod-service-account-mountsa Mar 12 00:47:30.344: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 12 00:47:30.362: INFO: created pod pod-service-account-nomountsa Mar 12 00:47:30.362: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 12 00:47:30.374: INFO: created pod pod-service-account-defaultsa-mountspec Mar 12 00:47:30.374: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 12 00:47:30.391: INFO: created pod pod-service-account-mountsa-mountspec Mar 12 00:47:30.391: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 12 00:47:30.439: INFO: created pod pod-service-account-nomountsa-mountspec Mar 12 00:47:30.439: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 12 00:47:30.459: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 12 00:47:30.459: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 12 00:47:30.466: INFO: created pod pod-service-account-mountsa-nomountspec Mar 12 00:47:30.466: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 12 00:47:30.487: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 12 00:47:30.487: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:47:30.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5696" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":280,"completed":258,"skipped":4189,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:47:30.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 12 00:47:38.889: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:38.889: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:38.930113 7 log.go:172] (0xc001df4420) (0xc001df7ea0) Create stream I0312 00:47:38.930211 7 log.go:172] (0xc001df4420) (0xc001df7ea0) Stream added, broadcasting: 1 I0312 00:47:38.933582 7 log.go:172] (0xc001df4420) Reply frame received for 1 I0312 00:47:38.933648 7 log.go:172] (0xc001df4420) (0xc002a6efa0) Create stream I0312 00:47:38.933671 7 log.go:172] (0xc001df4420) (0xc002a6efa0) Stream added, broadcasting: 3 I0312 00:47:38.934750 7 log.go:172] (0xc001df4420) Reply frame received for 3 I0312 00:47:38.934792 7 log.go:172] (0xc001df4420) (0xc0029ba960) Create stream I0312 00:47:38.934808 7 log.go:172] (0xc001df4420) (0xc0029ba960) Stream added, broadcasting: 5 I0312 00:47:38.935981 7 log.go:172] (0xc001df4420) Reply frame received for 5 I0312 00:47:39.010536 7 log.go:172] (0xc001df4420) Data frame received for 5 I0312 00:47:39.010575 7 log.go:172] (0xc0029ba960) (5) Data frame handling I0312 00:47:39.010595 7 log.go:172] (0xc001df4420) Data frame received for 3 I0312 00:47:39.010606 7 log.go:172] (0xc002a6efa0) (3) Data frame handling I0312 00:47:39.010620 7 log.go:172] (0xc002a6efa0) (3) Data frame sent I0312 00:47:39.010635 7 log.go:172] (0xc001df4420) Data frame received for 3 I0312 00:47:39.010650 7 log.go:172] (0xc002a6efa0) (3) Data frame handling I0312 00:47:39.012318 7 log.go:172] (0xc001df4420) Data frame received for 1 I0312 00:47:39.012373 7 log.go:172] (0xc001df7ea0) (1) Data frame handling I0312 00:47:39.012403 7 log.go:172] (0xc001df7ea0) (1) Data frame sent I0312 00:47:39.012427 7 log.go:172] (0xc001df4420) (0xc001df7ea0) Stream removed, broadcasting: 1 I0312 00:47:39.012456 7 log.go:172] (0xc001df4420) Go away received I0312 00:47:39.012649 7 log.go:172] (0xc001df4420) (0xc001df7ea0) Stream removed, broadcasting: 1 I0312 00:47:39.012693 7 log.go:172] (0xc001df4420) (0xc002a6efa0) Stream removed, broadcasting: 3 I0312 00:47:39.012713 7 log.go:172] 
(0xc001df4420) (0xc0029ba960) Stream removed, broadcasting: 5 Mar 12 00:47:39.012: INFO: Exec stderr: "" Mar 12 00:47:39.012: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.012: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.047103 7 log.go:172] (0xc001df46e0) (0xc001df7f40) Create stream I0312 00:47:39.047140 7 log.go:172] (0xc001df46e0) (0xc001df7f40) Stream added, broadcasting: 1 I0312 00:47:39.050209 7 log.go:172] (0xc001df46e0) Reply frame received for 1 I0312 00:47:39.050277 7 log.go:172] (0xc001df46e0) (0xc0016a6320) Create stream I0312 00:47:39.050295 7 log.go:172] (0xc001df46e0) (0xc0016a6320) Stream added, broadcasting: 3 I0312 00:47:39.051331 7 log.go:172] (0xc001df46e0) Reply frame received for 3 I0312 00:47:39.051403 7 log.go:172] (0xc001df46e0) (0xc001e72000) Create stream I0312 00:47:39.051423 7 log.go:172] (0xc001df46e0) (0xc001e72000) Stream added, broadcasting: 5 I0312 00:47:39.052597 7 log.go:172] (0xc001df46e0) Reply frame received for 5 I0312 00:47:39.117659 7 log.go:172] (0xc001df46e0) Data frame received for 3 I0312 00:47:39.117698 7 log.go:172] (0xc0016a6320) (3) Data frame handling I0312 00:47:39.117731 7 log.go:172] (0xc0016a6320) (3) Data frame sent I0312 00:47:39.117752 7 log.go:172] (0xc001df46e0) Data frame received for 3 I0312 00:47:39.117798 7 log.go:172] (0xc0016a6320) (3) Data frame handling I0312 00:47:39.117839 7 log.go:172] (0xc001df46e0) Data frame received for 5 I0312 00:47:39.117871 7 log.go:172] (0xc001e72000) (5) Data frame handling I0312 00:47:39.119265 7 log.go:172] (0xc001df46e0) Data frame received for 1 I0312 00:47:39.119282 7 log.go:172] (0xc001df7f40) (1) Data frame handling I0312 00:47:39.119294 7 log.go:172] (0xc001df7f40) (1) Data frame sent I0312 00:47:39.119308 7 log.go:172] (0xc001df46e0) (0xc001df7f40) Stream removed, broadcasting: 1 I0312 00:47:39.119321 7 log.go:172] (0xc001df46e0) Go away received I0312 00:47:39.119441 7 log.go:172] (0xc001df46e0) (0xc001df7f40) Stream removed, broadcasting: 1 I0312 00:47:39.119465 7 log.go:172] (0xc001df46e0) (0xc0016a6320) Stream removed, broadcasting: 3 I0312 00:47:39.119498 7 log.go:172] (0xc001df46e0) (0xc001e72000) Stream removed, broadcasting: 5 Mar 12 00:47:39.119: INFO: Exec stderr: "" Mar 12 00:47:39.119: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.119: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.147977 7 log.go:172] (0xc001d36420) (0xc001e72780) Create stream I0312 00:47:39.148014 7 log.go:172] (0xc001d36420) (0xc001e72780) Stream added, broadcasting: 1 I0312 00:47:39.150353 7 log.go:172] (0xc001d36420) Reply frame received for 1 I0312 00:47:39.150386 7 log.go:172] (0xc001d36420) (0xc0029bad20) Create stream I0312 00:47:39.150397 7 log.go:172] (0xc001d36420) (0xc0029bad20) Stream added, broadcasting: 3 I0312 00:47:39.151117 7 log.go:172] (0xc001d36420) Reply frame received for 3 I0312 00:47:39.151144 7 log.go:172] (0xc001d36420) (0xc0016a63c0) Create stream I0312 00:47:39.151154 7 log.go:172] (0xc001d36420) (0xc0016a63c0) Stream added, broadcasting: 5 I0312 00:47:39.151768 7 log.go:172] (0xc001d36420) Reply frame received for 5 I0312 00:47:39.196684 7 log.go:172] (0xc001d36420) Data frame received for 5 I0312 00:47:39.196723 7 
log.go:172] (0xc0016a63c0) (5) Data frame handling I0312 00:47:39.196744 7 log.go:172] (0xc001d36420) Data frame received for 3 I0312 00:47:39.196752 7 log.go:172] (0xc0029bad20) (3) Data frame handling I0312 00:47:39.196770 7 log.go:172] (0xc0029bad20) (3) Data frame sent I0312 00:47:39.196778 7 log.go:172] (0xc001d36420) Data frame received for 3 I0312 00:47:39.196784 7 log.go:172] (0xc0029bad20) (3) Data frame handling I0312 00:47:39.197941 7 log.go:172] (0xc001d36420) Data frame received for 1 I0312 00:47:39.197957 7 log.go:172] (0xc001e72780) (1) Data frame handling I0312 00:47:39.197970 7 log.go:172] (0xc001e72780) (1) Data frame sent I0312 00:47:39.197988 7 log.go:172] (0xc001d36420) (0xc001e72780) Stream removed, broadcasting: 1 I0312 00:47:39.198013 7 log.go:172] (0xc001d36420) Go away received I0312 00:47:39.198190 7 log.go:172] (0xc001d36420) (0xc001e72780) Stream removed, broadcasting: 1 I0312 00:47:39.198216 7 log.go:172] (0xc001d36420) (0xc0029bad20) Stream removed, broadcasting: 3 I0312 00:47:39.198226 7 log.go:172] (0xc001d36420) (0xc0016a63c0) Stream removed, broadcasting: 5 Mar 12 00:47:39.198: INFO: Exec stderr: "" Mar 12 00:47:39.198: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.198: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.225261 7 log.go:172] (0xc002a25340) (0xc0016a68c0) Create stream I0312 00:47:39.225315 7 log.go:172] (0xc002a25340) (0xc0016a68c0) Stream added, broadcasting: 1 I0312 00:47:39.228286 7 log.go:172] (0xc002a25340) Reply frame received for 1 I0312 00:47:39.228324 7 log.go:172] (0xc002a25340) (0xc0016a6960) Create stream I0312 00:47:39.228341 7 log.go:172] (0xc002a25340) (0xc0016a6960) Stream added, broadcasting: 3 I0312 00:47:39.233645 7 log.go:172] (0xc002a25340) Reply frame received for 3 I0312 00:47:39.233677 7 log.go:172] (0xc002a25340) (0xc001a30000) Create stream I0312 00:47:39.233687 7 log.go:172] (0xc002a25340) (0xc001a30000) Stream added, broadcasting: 5 I0312 00:47:39.234492 7 log.go:172] (0xc002a25340) Reply frame received for 5 I0312 00:47:39.296702 7 log.go:172] (0xc002a25340) Data frame received for 5 I0312 00:47:39.296733 7 log.go:172] (0xc001a30000) (5) Data frame handling I0312 00:47:39.296768 7 log.go:172] (0xc002a25340) Data frame received for 3 I0312 00:47:39.296778 7 log.go:172] (0xc0016a6960) (3) Data frame handling I0312 00:47:39.296789 7 log.go:172] (0xc0016a6960) (3) Data frame sent I0312 00:47:39.296805 7 log.go:172] (0xc002a25340) Data frame received for 3 I0312 00:47:39.296816 7 log.go:172] (0xc0016a6960) (3) Data frame handling I0312 00:47:39.297347 7 log.go:172] (0xc002a25340) Data frame received for 1 I0312 00:47:39.297366 7 log.go:172] (0xc0016a68c0) (1) Data frame handling I0312 00:47:39.297385 7 log.go:172] (0xc0016a68c0) (1) Data frame sent I0312 00:47:39.297396 7 log.go:172] (0xc002a25340) (0xc0016a68c0) Stream removed, broadcasting: 1 I0312 00:47:39.297412 7 log.go:172] (0xc002a25340) Go away received I0312 00:47:39.297516 7 log.go:172] (0xc002a25340) (0xc0016a68c0) Stream removed, broadcasting: 1 I0312 00:47:39.297558 7 log.go:172] (0xc002a25340) (0xc0016a6960) Stream removed, broadcasting: 3 I0312 00:47:39.297597 7 log.go:172] (0xc002a25340) (0xc001a30000) Stream removed, broadcasting: 5 Mar 12 00:47:39.297: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies 
/etc/hosts mount Mar 12 00:47:39.297: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.297: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.313971 7 log.go:172] (0xc001df4d10) (0xc001a303c0) Create stream I0312 00:47:39.313985 7 log.go:172] (0xc001df4d10) (0xc001a303c0) Stream added, broadcasting: 1 I0312 00:47:39.315222 7 log.go:172] (0xc001df4d10) Reply frame received for 1 I0312 00:47:39.315243 7 log.go:172] (0xc001df4d10) (0xc0029badc0) Create stream I0312 00:47:39.315250 7 log.go:172] (0xc001df4d10) (0xc0029badc0) Stream added, broadcasting: 3 I0312 00:47:39.315667 7 log.go:172] (0xc001df4d10) Reply frame received for 3 I0312 00:47:39.315685 7 log.go:172] (0xc001df4d10) (0xc0029bae60) Create stream I0312 00:47:39.315694 7 log.go:172] (0xc001df4d10) (0xc0029bae60) Stream added, broadcasting: 5 I0312 00:47:39.316209 7 log.go:172] (0xc001df4d10) Reply frame received for 5 I0312 00:47:39.374888 7 log.go:172] (0xc001df4d10) Data frame received for 5 I0312 00:47:39.374911 7 log.go:172] (0xc0029bae60) (5) Data frame handling I0312 00:47:39.374926 7 log.go:172] (0xc001df4d10) Data frame received for 3 I0312 00:47:39.374934 7 log.go:172] (0xc0029badc0) (3) Data frame handling I0312 00:47:39.374946 7 log.go:172] (0xc0029badc0) (3) Data frame sent I0312 00:47:39.374953 7 log.go:172] (0xc001df4d10) Data frame received for 3 I0312 00:47:39.374959 7 log.go:172] (0xc0029badc0) (3) Data frame handling I0312 00:47:39.375701 7 log.go:172] (0xc001df4d10) Data frame received for 1 I0312 00:47:39.375712 7 log.go:172] (0xc001a303c0) (1) Data frame handling I0312 00:47:39.375725 7 log.go:172] (0xc001a303c0) (1) Data frame sent I0312 00:47:39.375736 7 log.go:172] (0xc001df4d10) (0xc001a303c0) Stream removed, broadcasting: 1 I0312 00:47:39.375777 7 log.go:172] (0xc001df4d10) Go away received I0312 00:47:39.375803 7 log.go:172] (0xc001df4d10) (0xc001a303c0) Stream removed, broadcasting: 1 I0312 00:47:39.375813 7 log.go:172] (0xc001df4d10) (0xc0029badc0) Stream removed, broadcasting: 3 I0312 00:47:39.375824 7 log.go:172] (0xc001df4d10) (0xc0029bae60) Stream removed, broadcasting: 5 Mar 12 00:47:39.375: INFO: Exec stderr: "" Mar 12 00:47:39.375: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.375: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.395066 7 log.go:172] (0xc001df5340) (0xc001a30aa0) Create stream I0312 00:47:39.395082 7 log.go:172] (0xc001df5340) (0xc001a30aa0) Stream added, broadcasting: 1 I0312 00:47:39.396592 7 log.go:172] (0xc001df5340) Reply frame received for 1 I0312 00:47:39.396614 7 log.go:172] (0xc001df5340) (0xc001a30c80) Create stream I0312 00:47:39.396622 7 log.go:172] (0xc001df5340) (0xc001a30c80) Stream added, broadcasting: 3 I0312 00:47:39.397188 7 log.go:172] (0xc001df5340) Reply frame received for 3 I0312 00:47:39.397211 7 log.go:172] (0xc001df5340) (0xc001e72960) Create stream I0312 00:47:39.397224 7 log.go:172] (0xc001df5340) (0xc001e72960) Stream added, broadcasting: 5 I0312 00:47:39.397779 7 log.go:172] (0xc001df5340) Reply frame received for 5 I0312 00:47:39.468612 7 log.go:172] (0xc001df5340) Data frame received for 3 I0312 00:47:39.468632 7 log.go:172] (0xc001a30c80) (3) Data frame handling I0312 00:47:39.468652 7 log.go:172] 
(0xc001a30c80) (3) Data frame sent I0312 00:47:39.468661 7 log.go:172] (0xc001df5340) Data frame received for 3 I0312 00:47:39.468670 7 log.go:172] (0xc001a30c80) (3) Data frame handling I0312 00:47:39.468871 7 log.go:172] (0xc001df5340) Data frame received for 5 I0312 00:47:39.468886 7 log.go:172] (0xc001e72960) (5) Data frame handling I0312 00:47:39.470561 7 log.go:172] (0xc001df5340) Data frame received for 1 I0312 00:47:39.470582 7 log.go:172] (0xc001a30aa0) (1) Data frame handling I0312 00:47:39.470603 7 log.go:172] (0xc001a30aa0) (1) Data frame sent I0312 00:47:39.470643 7 log.go:172] (0xc001df5340) (0xc001a30aa0) Stream removed, broadcasting: 1 I0312 00:47:39.470696 7 log.go:172] (0xc001df5340) Go away received I0312 00:47:39.470773 7 log.go:172] (0xc001df5340) (0xc001a30aa0) Stream removed, broadcasting: 1 I0312 00:47:39.470804 7 log.go:172] (0xc001df5340) (0xc001a30c80) Stream removed, broadcasting: 3 I0312 00:47:39.470843 7 log.go:172] (0xc001df5340) (0xc001e72960) Stream removed, broadcasting: 5 Mar 12 00:47:39.470: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 12 00:47:39.470: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.470: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.498944 7 log.go:172] (0xc002a25b80) (0xc0016a6fa0) Create stream I0312 00:47:39.498979 7 log.go:172] (0xc002a25b80) (0xc0016a6fa0) Stream added, broadcasting: 1 I0312 00:47:39.501405 7 log.go:172] (0xc002a25b80) Reply frame received for 1 I0312 00:47:39.501442 7 log.go:172] (0xc002a25b80) (0xc0016a7180) Create stream I0312 00:47:39.501455 7 log.go:172] (0xc002a25b80) (0xc0016a7180) Stream added, broadcasting: 3 I0312 00:47:39.502450 7 log.go:172] (0xc002a25b80) Reply frame received for 3 I0312 00:47:39.502492 7 log.go:172] (0xc002a25b80) (0xc001e72c80) Create stream I0312 00:47:39.502505 7 log.go:172] (0xc002a25b80) (0xc001e72c80) Stream added, broadcasting: 5 I0312 00:47:39.503421 7 log.go:172] (0xc002a25b80) Reply frame received for 5 I0312 00:47:39.568962 7 log.go:172] (0xc002a25b80) Data frame received for 5 I0312 00:47:39.569002 7 log.go:172] (0xc002a25b80) Data frame received for 3 I0312 00:47:39.569052 7 log.go:172] (0xc0016a7180) (3) Data frame handling I0312 00:47:39.569069 7 log.go:172] (0xc0016a7180) (3) Data frame sent I0312 00:47:39.569075 7 log.go:172] (0xc002a25b80) Data frame received for 3 I0312 00:47:39.569082 7 log.go:172] (0xc0016a7180) (3) Data frame handling I0312 00:47:39.569104 7 log.go:172] (0xc001e72c80) (5) Data frame handling I0312 00:47:39.570223 7 log.go:172] (0xc002a25b80) Data frame received for 1 I0312 00:47:39.570237 7 log.go:172] (0xc0016a6fa0) (1) Data frame handling I0312 00:47:39.570245 7 log.go:172] (0xc0016a6fa0) (1) Data frame sent I0312 00:47:39.570258 7 log.go:172] (0xc002a25b80) (0xc0016a6fa0) Stream removed, broadcasting: 1 I0312 00:47:39.570271 7 log.go:172] (0xc002a25b80) Go away received I0312 00:47:39.570363 7 log.go:172] (0xc002a25b80) (0xc0016a6fa0) Stream removed, broadcasting: 1 I0312 00:47:39.570385 7 log.go:172] (0xc002a25b80) (0xc0016a7180) Stream removed, broadcasting: 3 I0312 00:47:39.570397 7 log.go:172] (0xc002a25b80) (0xc001e72c80) Stream removed, broadcasting: 5 Mar 12 00:47:39.570: INFO: Exec stderr: "" Mar 12 00:47:39.570: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.570: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.597209 7 log.go:172] (0xc0023a0370) (0xc002a6f220) Create stream I0312 00:47:39.597238 7 log.go:172] (0xc0023a0370) (0xc002a6f220) Stream added, broadcasting: 1 I0312 00:47:39.602183 7 log.go:172] (0xc0023a0370) Reply frame received for 1 I0312 00:47:39.602235 7 log.go:172] (0xc0023a0370) (0xc0016a72c0) Create stream I0312 00:47:39.602248 7 log.go:172] (0xc0023a0370) (0xc0016a72c0) Stream added, broadcasting: 3 I0312 00:47:39.604316 7 log.go:172] (0xc0023a0370) Reply frame received for 3 I0312 00:47:39.604349 7 log.go:172] (0xc0023a0370) (0xc001e730e0) Create stream I0312 00:47:39.604359 7 log.go:172] (0xc0023a0370) (0xc001e730e0) Stream added, broadcasting: 5 I0312 00:47:39.605158 7 log.go:172] (0xc0023a0370) Reply frame received for 5 I0312 00:47:39.668857 7 log.go:172] (0xc0023a0370) Data frame received for 5 I0312 00:47:39.668885 7 log.go:172] (0xc001e730e0) (5) Data frame handling I0312 00:47:39.668913 7 log.go:172] (0xc0023a0370) Data frame received for 3 I0312 00:47:39.668925 7 log.go:172] (0xc0016a72c0) (3) Data frame handling I0312 00:47:39.668947 7 log.go:172] (0xc0016a72c0) (3) Data frame sent I0312 00:47:39.668958 7 log.go:172] (0xc0023a0370) Data frame received for 3 I0312 00:47:39.668972 7 log.go:172] (0xc0016a72c0) (3) Data frame handling I0312 00:47:39.669892 7 log.go:172] (0xc0023a0370) Data frame received for 1 I0312 00:47:39.669912 7 log.go:172] (0xc002a6f220) (1) Data frame handling I0312 00:47:39.669924 7 log.go:172] (0xc002a6f220) (1) Data frame sent I0312 00:47:39.669941 7 log.go:172] (0xc0023a0370) (0xc002a6f220) Stream removed, broadcasting: 1 I0312 00:47:39.669954 7 log.go:172] (0xc0023a0370) Go away received I0312 00:47:39.670082 7 log.go:172] (0xc0023a0370) (0xc002a6f220) Stream removed, broadcasting: 1 I0312 00:47:39.670110 7 log.go:172] (0xc0023a0370) (0xc0016a72c0) Stream removed, broadcasting: 3 I0312 00:47:39.670153 7 log.go:172] (0xc0023a0370) (0xc001e730e0) Stream removed, broadcasting: 5 Mar 12 00:47:39.670: INFO: Exec stderr: "" Mar 12 00:47:39.670: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.670: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.693694 7 log.go:172] (0xc001de2dc0) (0xc0029bb040) Create stream I0312 00:47:39.693713 7 log.go:172] (0xc001de2dc0) (0xc0029bb040) Stream added, broadcasting: 1 I0312 00:47:39.695340 7 log.go:172] (0xc001de2dc0) Reply frame received for 1 I0312 00:47:39.695369 7 log.go:172] (0xc001de2dc0) (0xc0029bb0e0) Create stream I0312 00:47:39.695380 7 log.go:172] (0xc001de2dc0) (0xc0029bb0e0) Stream added, broadcasting: 3 I0312 00:47:39.696000 7 log.go:172] (0xc001de2dc0) Reply frame received for 3 I0312 00:47:39.696022 7 log.go:172] (0xc001de2dc0) (0xc001e732c0) Create stream I0312 00:47:39.696029 7 log.go:172] (0xc001de2dc0) (0xc001e732c0) Stream added, broadcasting: 5 I0312 00:47:39.696613 7 log.go:172] (0xc001de2dc0) Reply frame received for 5 I0312 00:47:39.757584 7 log.go:172] (0xc001de2dc0) Data frame received for 3 I0312 00:47:39.757611 7 log.go:172] (0xc0029bb0e0) (3) Data frame handling I0312 00:47:39.757623 7 log.go:172] (0xc0029bb0e0) (3) Data frame sent I0312 00:47:39.757641 7 log.go:172] 
(0xc001de2dc0) Data frame received for 3 I0312 00:47:39.757650 7 log.go:172] (0xc0029bb0e0) (3) Data frame handling I0312 00:47:39.757665 7 log.go:172] (0xc001de2dc0) Data frame received for 5 I0312 00:47:39.757683 7 log.go:172] (0xc001e732c0) (5) Data frame handling I0312 00:47:39.759509 7 log.go:172] (0xc001de2dc0) Data frame received for 1 I0312 00:47:39.759530 7 log.go:172] (0xc0029bb040) (1) Data frame handling I0312 00:47:39.759542 7 log.go:172] (0xc0029bb040) (1) Data frame sent I0312 00:47:39.759561 7 log.go:172] (0xc001de2dc0) (0xc0029bb040) Stream removed, broadcasting: 1 I0312 00:47:39.759576 7 log.go:172] (0xc001de2dc0) Go away received I0312 00:47:39.759681 7 log.go:172] (0xc001de2dc0) (0xc0029bb040) Stream removed, broadcasting: 1 I0312 00:47:39.759697 7 log.go:172] (0xc001de2dc0) (0xc0029bb0e0) Stream removed, broadcasting: 3 I0312 00:47:39.759704 7 log.go:172] (0xc001de2dc0) (0xc001e732c0) Stream removed, broadcasting: 5 Mar 12 00:47:39.759: INFO: Exec stderr: "" Mar 12 00:47:39.759: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4983 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:47:39.759: INFO: >>> kubeConfig: /root/.kube/config I0312 00:47:39.786716 7 log.go:172] (0xc001d36a50) (0xc001e735e0) Create stream I0312 00:47:39.786744 7 log.go:172] (0xc001d36a50) (0xc001e735e0) Stream added, broadcasting: 1 I0312 00:47:39.788746 7 log.go:172] (0xc001d36a50) Reply frame received for 1 I0312 00:47:39.788773 7 log.go:172] (0xc001d36a50) (0xc002a6f4a0) Create stream I0312 00:47:39.788783 7 log.go:172] (0xc001d36a50) (0xc002a6f4a0) Stream added, broadcasting: 3 I0312 00:47:39.789519 7 log.go:172] (0xc001d36a50) Reply frame received for 3 I0312 00:47:39.789550 7 log.go:172] (0xc001d36a50) (0xc0016a74a0) Create stream I0312 00:47:39.789563 7 log.go:172] (0xc001d36a50) (0xc0016a74a0) Stream added, broadcasting: 5 I0312 00:47:39.790316 7 log.go:172] (0xc001d36a50) Reply frame received for 5 I0312 00:47:39.852658 7 log.go:172] (0xc001d36a50) Data frame received for 3 I0312 00:47:39.852680 7 log.go:172] (0xc002a6f4a0) (3) Data frame handling I0312 00:47:39.852697 7 log.go:172] (0xc002a6f4a0) (3) Data frame sent I0312 00:47:39.852706 7 log.go:172] (0xc001d36a50) Data frame received for 3 I0312 00:47:39.852729 7 log.go:172] (0xc001d36a50) Data frame received for 5 I0312 00:47:39.852755 7 log.go:172] (0xc0016a74a0) (5) Data frame handling I0312 00:47:39.852777 7 log.go:172] (0xc002a6f4a0) (3) Data frame handling I0312 00:47:39.853877 7 log.go:172] (0xc001d36a50) Data frame received for 1 I0312 00:47:39.853892 7 log.go:172] (0xc001e735e0) (1) Data frame handling I0312 00:47:39.853904 7 log.go:172] (0xc001e735e0) (1) Data frame sent I0312 00:47:39.853917 7 log.go:172] (0xc001d36a50) (0xc001e735e0) Stream removed, broadcasting: 1 I0312 00:47:39.853937 7 log.go:172] (0xc001d36a50) Go away received I0312 00:47:39.853987 7 log.go:172] (0xc001d36a50) (0xc001e735e0) Stream removed, broadcasting: 1 I0312 00:47:39.854001 7 log.go:172] (0xc001d36a50) (0xc002a6f4a0) Stream removed, broadcasting: 3 I0312 00:47:39.854013 7 log.go:172] (0xc001d36a50) (0xc0016a74a0) Stream removed, broadcasting: 5 Mar 12 00:47:39.854: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:47:39.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-kubelet-etc-hosts-4983" for this suite. • [SLOW TEST:9.284 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":259,"skipped":4201,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:47:39.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4077 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4077;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4077 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4077;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4077.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4077.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4077.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4077.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4077.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4077.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4077.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 103.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.103_udp@PTR;check="$$(dig +tcp +noall +answer +search 103.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.103_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4077 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4077;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4077 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4077;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4077.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4077.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4077.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4077.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4077.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4077.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4077.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4077.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4077.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 103.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.103_udp@PTR;check="$$(dig +tcp +noall +answer +search 103.1.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.1.103_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 00:47:44.013: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.016: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.019: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.023: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.025: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.028: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.030: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.048: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.050: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.052: INFO: Unable to read jessie_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.054: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.056: INFO: Unable to read jessie_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.058: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.060: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.062: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:44.077: INFO: Lookups using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4077 wheezy_tcp@dns-test-service.dns-4077 wheezy_udp@dns-test-service.dns-4077.svc wheezy_tcp@dns-test-service.dns-4077.svc wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4077 jessie_tcp@dns-test-service.dns-4077 jessie_udp@dns-test-service.dns-4077.svc jessie_tcp@dns-test-service.dns-4077.svc jessie_udp@_http._tcp.dns-test-service.dns-4077.svc jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc] Mar 12 00:47:49.082: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.085: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.092: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.096: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.099: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.102: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.105: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.109: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.182: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.203: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.206: INFO: Unable to read jessie_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.210: INFO: Unable to read jessie_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.211: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.213: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.215: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:49.233: INFO: Lookups using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4077 wheezy_tcp@dns-test-service.dns-4077 wheezy_udp@dns-test-service.dns-4077.svc wheezy_tcp@dns-test-service.dns-4077.svc wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4077 jessie_tcp@dns-test-service.dns-4077 jessie_udp@dns-test-service.dns-4077.svc jessie_tcp@dns-test-service.dns-4077.svc jessie_udp@_http._tcp.dns-test-service.dns-4077.svc jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc] Mar 12 00:47:54.081: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.086: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.091: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.095: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077 from pod 
dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.097: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.100: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.104: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.118: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.120: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.121: INFO: Unable to read jessie_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.123: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.125: INFO: Unable to read jessie_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.129: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.130: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:54.142: INFO: Lookups using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4077 wheezy_tcp@dns-test-service.dns-4077 wheezy_udp@dns-test-service.dns-4077.svc wheezy_tcp@dns-test-service.dns-4077.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4077 jessie_tcp@dns-test-service.dns-4077 jessie_udp@dns-test-service.dns-4077.svc jessie_tcp@dns-test-service.dns-4077.svc jessie_udp@_http._tcp.dns-test-service.dns-4077.svc jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc] Mar 12 00:47:59.082: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.086: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.089: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.092: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.096: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.105: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.126: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.129: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.132: INFO: Unable to read jessie_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.135: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.138: INFO: Unable to read jessie_udp@dns-test-service.dns-4077.svc from pod 
dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.140: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.143: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.145: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:47:59.163: INFO: Lookups using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4077 wheezy_tcp@dns-test-service.dns-4077 wheezy_udp@dns-test-service.dns-4077.svc wheezy_tcp@dns-test-service.dns-4077.svc wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4077 jessie_tcp@dns-test-service.dns-4077 jessie_udp@dns-test-service.dns-4077.svc jessie_tcp@dns-test-service.dns-4077.svc jessie_udp@_http._tcp.dns-test-service.dns-4077.svc jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc] Mar 12 00:48:04.120: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.123: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.125: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.127: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.130: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.133: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.135: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.137: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod 
dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.152: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.154: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.157: INFO: Unable to read jessie_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.159: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.161: INFO: Unable to read jessie_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.163: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.165: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.167: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:04.186: INFO: Lookups using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4077 wheezy_tcp@dns-test-service.dns-4077 wheezy_udp@dns-test-service.dns-4077.svc wheezy_tcp@dns-test-service.dns-4077.svc wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4077 jessie_tcp@dns-test-service.dns-4077 jessie_udp@dns-test-service.dns-4077.svc jessie_tcp@dns-test-service.dns-4077.svc jessie_udp@_http._tcp.dns-test-service.dns-4077.svc jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc] Mar 12 00:48:09.081: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.083: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.085: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the 
server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.087: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.089: INFO: Unable to read wheezy_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.091: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.093: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.094: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.106: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.107: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.109: INFO: Unable to read jessie_udp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077 from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.112: INFO: Unable to read jessie_udp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.115: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.117: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc from pod dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e: the server could not find the requested resource (get pods dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e) Mar 12 00:48:09.127: INFO: Lookups using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4077 wheezy_tcp@dns-test-service.dns-4077 wheezy_udp@dns-test-service.dns-4077.svc wheezy_tcp@dns-test-service.dns-4077.svc wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4077 jessie_tcp@dns-test-service.dns-4077 jessie_udp@dns-test-service.dns-4077.svc jessie_tcp@dns-test-service.dns-4077.svc jessie_udp@_http._tcp.dns-test-service.dns-4077.svc jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc] Mar 12 00:48:14.179: INFO: DNS probes using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:48:14.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4077" for this suite.
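
The failures above are expected while the probe converges: the framework reads each result file (wheezy_udp@dns-test-service and the rest) out of the probe pod, and "the server could not find the requested resource" simply means the dig loop has not written that file yet, so each pass keeps reporting "Unable to read ... from pod" until every lookup has succeeded at least once. What the loops check is that partial qualified names resolve through the pod's resolv.conf search list (dns-4077.svc.cluster.local, svc.cluster.local, cluster.local), which is what dig's +search flag exercises over both UDP and TCP. The same lookups in plain Go, a sketch rather than the framework's code, and meaningful only when run inside a pod in that namespace:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a pod in namespace dns-4077, the kubelet-written resolv.conf
	// search list expands each partial name below to the service's FQDN,
	// mirroring the dig +search probes in the log.
	for _, name := range []string{
		"dns-test-service",              // <service>
		"dns-test-service.dns-4077",     // <service>.<namespace>
		"dns-test-service.dns-4077.svc", // <service>.<namespace>.svc
	} {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Println(name, "lookup failed:", err)
			continue
		}
		fmt.Println(name, "->", addrs)
	}
}
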
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4077 wheezy_tcp@dns-test-service.dns-4077 wheezy_udp@dns-test-service.dns-4077.svc wheezy_tcp@dns-test-service.dns-4077.svc wheezy_udp@_http._tcp.dns-test-service.dns-4077.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4077.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4077 jessie_tcp@dns-test-service.dns-4077 jessie_udp@dns-test-service.dns-4077.svc jessie_tcp@dns-test-service.dns-4077.svc jessie_udp@_http._tcp.dns-test-service.dns-4077.svc jessie_tcp@_http._tcp.dns-test-service.dns-4077.svc] Mar 12 00:48:14.179: INFO: DNS probes using dns-4077/dns-test-fea1c3e2-f751-4a79-896c-2eac4574f59e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:48:14.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4077" for this suite. • [SLOW TEST:34.600 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":260,"skipped":4206,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:48:14.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-47 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 00:48:14.510: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 12 00:48:14.575: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 12 00:48:16.586: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:18.579: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:20.579: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:22.591: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:24.578: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:26.578: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:28.578: INFO: The status of Pod netserver-0 is Running (Ready 
= false) Mar 12 00:48:30.579: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:32.577: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 00:48:34.578: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 12 00:48:34.580: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 12 00:48:36.598: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostname&protocol=udp&host=10.244.1.199&port=8081&tries=1'] Namespace:pod-network-test-47 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:48:36.598: INFO: >>> kubeConfig: /root/.kube/config I0312 00:48:36.632770 7 log.go:172] (0xc001df4420) (0xc0023ef180) Create stream I0312 00:48:36.632811 7 log.go:172] (0xc001df4420) (0xc0023ef180) Stream added, broadcasting: 1 I0312 00:48:36.634428 7 log.go:172] (0xc001df4420) Reply frame received for 1 I0312 00:48:36.634465 7 log.go:172] (0xc001df4420) (0xc00281ebe0) Create stream I0312 00:48:36.634477 7 log.go:172] (0xc001df4420) (0xc00281ebe0) Stream added, broadcasting: 3 I0312 00:48:36.635491 7 log.go:172] (0xc001df4420) Reply frame received for 3 I0312 00:48:36.635521 7 log.go:172] (0xc001df4420) (0xc0023ef220) Create stream I0312 00:48:36.635535 7 log.go:172] (0xc001df4420) (0xc0023ef220) Stream added, broadcasting: 5 I0312 00:48:36.636586 7 log.go:172] (0xc001df4420) Reply frame received for 5 I0312 00:48:36.706540 7 log.go:172] (0xc001df4420) Data frame received for 3 I0312 00:48:36.706567 7 log.go:172] (0xc00281ebe0) (3) Data frame handling I0312 00:48:36.706586 7 log.go:172] (0xc00281ebe0) (3) Data frame sent I0312 00:48:36.706937 7 log.go:172] (0xc001df4420) Data frame received for 5 I0312 00:48:36.706965 7 log.go:172] (0xc0023ef220) (5) Data frame handling I0312 00:48:36.707184 7 log.go:172] (0xc001df4420) Data frame received for 3 I0312 00:48:36.707202 7 log.go:172] (0xc00281ebe0) (3) Data frame handling I0312 00:48:36.708760 7 log.go:172] (0xc001df4420) Data frame received for 1 I0312 00:48:36.708781 7 log.go:172] (0xc0023ef180) (1) Data frame handling I0312 00:48:36.708792 7 log.go:172] (0xc0023ef180) (1) Data frame sent I0312 00:48:36.708808 7 log.go:172] (0xc001df4420) (0xc0023ef180) Stream removed, broadcasting: 1 I0312 00:48:36.708825 7 log.go:172] (0xc001df4420) Go away received I0312 00:48:36.708932 7 log.go:172] (0xc001df4420) (0xc0023ef180) Stream removed, broadcasting: 1 I0312 00:48:36.708966 7 log.go:172] (0xc001df4420) (0xc00281ebe0) Stream removed, broadcasting: 3 I0312 00:48:36.708988 7 log.go:172] (0xc001df4420) (0xc0023ef220) Stream removed, broadcasting: 5 Mar 12 00:48:36.709: INFO: Waiting for responses: map[] Mar 12 00:48:36.712: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostname&protocol=udp&host=10.244.2.58&port=8081&tries=1'] Namespace:pod-network-test-47 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 00:48:36.712: INFO: >>> kubeConfig: /root/.kube/config I0312 00:48:36.741855 7 log.go:172] (0xc0023a02c0) (0xc0029106e0) Create stream I0312 00:48:36.741882 7 log.go:172] (0xc0023a02c0) (0xc0029106e0) Stream added, broadcasting: 1 I0312 00:48:36.743668 7 log.go:172] (0xc0023a02c0) Reply frame received for 1 I0312 00:48:36.743698 7 log.go:172] (0xc0023a02c0) (0xc0023ef2c0) Create stream I0312 00:48:36.743710 7 log.go:172] 
(0xc0023a02c0) (0xc0023ef2c0) Stream added, broadcasting: 3 I0312 00:48:36.744464 7 log.go:172] (0xc0023a02c0) Reply frame received for 3 I0312 00:48:36.744488 7 log.go:172] (0xc0023a02c0) (0xc0023ef360) Create stream I0312 00:48:36.744498 7 log.go:172] (0xc0023a02c0) (0xc0023ef360) Stream added, broadcasting: 5 I0312 00:48:36.745202 7 log.go:172] (0xc0023a02c0) Reply frame received for 5 I0312 00:48:36.828879 7 log.go:172] (0xc0023a02c0) Data frame received for 3 I0312 00:48:36.828901 7 log.go:172] (0xc0023ef2c0) (3) Data frame handling I0312 00:48:36.828911 7 log.go:172] (0xc0023ef2c0) (3) Data frame sent I0312 00:48:36.829337 7 log.go:172] (0xc0023a02c0) Data frame received for 5 I0312 00:48:36.829361 7 log.go:172] (0xc0023ef360) (5) Data frame handling I0312 00:48:36.829387 7 log.go:172] (0xc0023a02c0) Data frame received for 3 I0312 00:48:36.829413 7 log.go:172] (0xc0023ef2c0) (3) Data frame handling I0312 00:48:36.831110 7 log.go:172] (0xc0023a02c0) Data frame received for 1 I0312 00:48:36.831136 7 log.go:172] (0xc0029106e0) (1) Data frame handling I0312 00:48:36.831160 7 log.go:172] (0xc0029106e0) (1) Data frame sent I0312 00:48:36.831260 7 log.go:172] (0xc0023a02c0) (0xc0029106e0) Stream removed, broadcasting: 1 I0312 00:48:36.831290 7 log.go:172] (0xc0023a02c0) Go away received I0312 00:48:36.831436 7 log.go:172] (0xc0023a02c0) (0xc0029106e0) Stream removed, broadcasting: 1 I0312 00:48:36.831462 7 log.go:172] (0xc0023a02c0) (0xc0023ef2c0) Stream removed, broadcasting: 3 I0312 00:48:36.831487 7 log.go:172] (0xc0023a02c0) (0xc0023ef360) Stream removed, broadcasting: 5 Mar 12 00:48:36.831: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:48:36.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-47" for this suite. 
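[editor's note] The exec above drives agnhost's /dial endpoint from inside the test pod to prove pod-to-pod UDP reachability. A rough manual equivalent is sketched below; the pod name and the 10.244.x.x addresses are stand-ins for whatever `kubectl get pods -o wide` reports in a given run, and the JSON response shape is the usual agnhost behaviour, not something this log itself confirms.

# Sketch: replay the intra-pod UDP probe by hand (pod name and IPs are placeholders).
kubectl -n pod-network-test-47 exec test-container-pod -- \
  curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostname&protocol=udp&host=10.244.1.199&port=8081&tries=1'
# On a healthy pod network this prints a body like {"responses":["netserver-0"]};
# the framework retries until its map of outstanding responses is empty, which is
# what the "Waiting for responses: map[]" lines above record.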
• [SLOW TEST:22.371 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":261,"skipped":4263,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:48:36.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Mar 12 00:48:39.410: INFO: Successfully updated pod "labelsupdate5e469341-fc7f-493f-9f19-9bae7546d39b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:48:41.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5822" for this suite. 
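[editor's note] The labels-update test just summarised mounts metadata.labels through a projected downwardAPI volume and then edits the pod's labels, expecting the kubelet to rewrite the mounted file. A minimal reproduction, with illustrative names not taken from the test, looks like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo            # illustrative name
  labels:
    tier: demo
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Change a label and the mounted file is rewritten in place after the kubelet's
# next sync, which is the update the test waits for:
kubectl label pod labels-demo tier=updated --overwrite
kubectl logs labels-demo --tail=2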
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":262,"skipped":4277,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:48:41.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-01e7d8f9-a306-4799-95e4-051abfb373e4 STEP: Creating a pod to test consume secrets Mar 12 00:48:41.522: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7a6b1c1-1e00-46d6-a2d2-7ed014765bbc" in namespace "projected-6284" to be "success or failure" Mar 12 00:48:41.535: INFO: Pod "pod-projected-secrets-b7a6b1c1-1e00-46d6-a2d2-7ed014765bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.249598ms Mar 12 00:48:43.539: INFO: Pod "pod-projected-secrets-b7a6b1c1-1e00-46d6-a2d2-7ed014765bbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016851844s STEP: Saw pod success Mar 12 00:48:43.539: INFO: Pod "pod-projected-secrets-b7a6b1c1-1e00-46d6-a2d2-7ed014765bbc" satisfied condition "success or failure" Mar 12 00:48:43.542: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-b7a6b1c1-1e00-46d6-a2d2-7ed014765bbc container projected-secret-volume-test: STEP: delete the pod Mar 12 00:48:43.584: INFO: Waiting for pod pod-projected-secrets-b7a6b1c1-1e00-46d6-a2d2-7ed014765bbc to disappear Mar 12 00:48:43.592: INFO: Pod pod-projected-secrets-b7a6b1c1-1e00-46d6-a2d2-7ed014765bbc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:48:43.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6284" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":263,"skipped":4305,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:48:43.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 00:48:43.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5592' Mar 12 00:48:43.762: INFO: stderr: "" Mar 12 00:48:43.762: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 12 00:48:48.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5592 -o json' Mar 12 00:48:48.907: INFO: stderr: "" Mar 12 00:48:48.907: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-12T00:48:43Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5592\",\n \"resourceVersion\": \"951224\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5592/pods/e2e-test-httpd-pod\",\n \"uid\": \"be702fac-6a49-43f8-8033-ff164cb7e271\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-85qnj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-85qnj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-85qnj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T00:48:43Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T00:48:45Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T00:48:45Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T00:48:43Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://46e52fcb4d01eb60cd9262ca35d30418acfbd8cb260c79fffc96430a2111712c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-12T00:48:44Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.203\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.203\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-12T00:48:43Z\"\n }\n}\n" STEP: replace the image in the pod Mar 12 00:48:48.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5592' Mar 12 00:48:49.122: INFO: stderr: "" Mar 12 00:48:49.122: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904 Mar 12 00:48:49.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5592' Mar 12 00:49:02.489: INFO: stderr: "" Mar 12 00:49:02.489: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:49:02.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5592" for this suite. 
• [SLOW TEST:18.902 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":280,"completed":264,"skipped":4348,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:49:02.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:49:02.559: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 12 00:49:02.566: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 12 00:49:07.569: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 00:49:07.570: INFO: Creating deployment "test-rolling-update-deployment" Mar 12 00:49:07.572: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 12 00:49:07.605: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 12 00:49:09.650: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 12 00:49:09.652: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 12 00:49:09.657: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2359 /apis/apps/v1/namespaces/deployment-2359/deployments/test-rolling-update-deployment 4a719b57-a625-4618-96c7-1321b9986072 951393 1 2020-03-12 00:49:07 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00329cd78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 00:49:07 +0000 UTC,LastTransitionTime:2020-03-12 00:49:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-12 00:49:09 +0000 UTC,LastTransitionTime:2020-03-12 00:49:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 12 00:49:09.659: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2359 /apis/apps/v1/namespaces/deployment-2359/replicasets/test-rolling-update-deployment-67cf4f6444 4746061b-0dcd-4d13-b37f-804b040e168c 951382 1 2020-03-12 00:49:07 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 4a719b57-a625-4618-96c7-1321b9986072 0xc0038b2ac7 0xc0038b2ac8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038b2b38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 00:49:09.659: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 12 00:49:09.659: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2359 /apis/apps/v1/namespaces/deployment-2359/replicasets/test-rolling-update-controller 7cdfcbe0-9403-4277-a008-789d1ee2644b 951391 2 2020-03-12 
00:49:02 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 4a719b57-a625-4618-96c7-1321b9986072 0xc0038b29f7 0xc0038b29f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038b2a58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 00:49:09.661: INFO: Pod "test-rolling-update-deployment-67cf4f6444-r6xg2" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-r6xg2 test-rolling-update-deployment-67cf4f6444- deployment-2359 /api/v1/namespaces/deployment-2359/pods/test-rolling-update-deployment-67cf4f6444-r6xg2 70748568-afa4-413c-b56d-377fba3295ab 951381 0 2020-03-12 00:49:07 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 4746061b-0dcd-4d13-b37f-804b040e168c 0xc0038b32e7 0xc0038b32e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cswdl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cswdl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cswdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSec
urityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:49:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:49:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:49:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:49:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.205,StartTime:2020-03-12 00:49:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:49:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8743e64076ed30d24d6ea9d0e7f5d32ca458fa8bc2adf8406ea322edeed2363f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.205,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:49:09.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2359" for this suite. 
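[editor's note] Two notes on the deployment dump above. First, the `25%!,(MISSING)` noise in the Strategy field is a Go fmt artifact (an unescaped `%` in the printed struct); the underlying values are the stock rolling-update settings, maxUnavailable 25% and maxSurge 25%. Second, a manifest equivalent to the deployment the test builds, with an illustrative name, would be roughly:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo           # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF
# Any pod-template change (e.g. kubectl set image deployment/rolling-demo
# agnhost=<new-image>) then exercises the delete-old/create-new rollout the
# test asserts, scaling the adopted old ReplicaSet to 0 as seen above.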
• [SLOW TEST:7.165 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":265,"skipped":4377,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:49:09.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:49:10.289: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 00:49:12.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719570950, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719570950, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719570950, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719570950, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:49:15.315: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:49:15.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9236" for this suite. 
STEP: Destroying namespace "webhook-9236-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.819 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":266,"skipped":4392,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:49:15.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-9748f0ab-901a-4edb-93e9-ee04c6b4a91d STEP: Creating a pod to test consume secrets Mar 12 00:49:15.643: INFO: Waiting up to 5m0s for pod "pod-secrets-4d2f753a-d9ec-4a49-b7ba-49ce29182375" in namespace "secrets-1580" to be "success or failure" Mar 12 00:49:15.671: INFO: Pod "pod-secrets-4d2f753a-d9ec-4a49-b7ba-49ce29182375": Phase="Pending", Reason="", readiness=false. Elapsed: 28.547286ms Mar 12 00:49:17.695: INFO: Pod "pod-secrets-4d2f753a-d9ec-4a49-b7ba-49ce29182375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.052437602s STEP: Saw pod success Mar 12 00:49:17.695: INFO: Pod "pod-secrets-4d2f753a-d9ec-4a49-b7ba-49ce29182375" satisfied condition "success or failure" Mar 12 00:49:17.697: INFO: Trying to get logs from node latest-worker pod pod-secrets-4d2f753a-d9ec-4a49-b7ba-49ce29182375 container secret-volume-test: STEP: delete the pod Mar 12 00:49:17.714: INFO: Waiting for pod pod-secrets-4d2f753a-d9ec-4a49-b7ba-49ce29182375 to disappear Mar 12 00:49:17.727: INFO: Pod pod-secrets-4d2f753a-d9ec-4a49-b7ba-49ce29182375 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:49:17.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1580" for this suite. STEP: Destroying namespace "secret-namespace-6808" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":267,"skipped":4398,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:49:17.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5867 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5867 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5867 Mar 12 00:49:17.821: INFO: Found 0 stateful pods, waiting for 1 Mar 12 00:49:27.824: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 12 00:49:27.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 00:49:28.075: INFO: stderr: "I0312 00:49:27.955615 4291 log.go:172] (0xc0009596b0) (0xc000a6c820) Create stream\nI0312 00:49:27.955651 4291 log.go:172] (0xc0009596b0) (0xc000a6c820) Stream added, broadcasting: 1\nI0312 00:49:27.957259 4291 log.go:172] (0xc0009596b0) Reply frame received for 1\nI0312 00:49:27.957285 4291 log.go:172] (0xc0009596b0) (0xc000795360) Create stream\nI0312 00:49:27.957295 4291 log.go:172] (0xc0009596b0) (0xc000795360) Stream added, broadcasting: 3\nI0312 00:49:27.957853 4291 log.go:172] (0xc0009596b0) Reply frame received for 3\nI0312 00:49:27.957889 4291 log.go:172] (0xc0009596b0) (0xc000a6c8c0) Create stream\nI0312 00:49:27.957896 4291 log.go:172] (0xc0009596b0) (0xc000a6c8c0) Stream added, broadcasting: 5\nI0312 00:49:27.958536 4291 log.go:172] (0xc0009596b0) Reply frame received for 5\nI0312 00:49:28.045000 4291 log.go:172] (0xc0009596b0) Data frame received for 5\nI0312 00:49:28.045022 4291 log.go:172] (0xc000a6c8c0) (5) Data frame handling\nI0312 00:49:28.045032 4291 log.go:172] (0xc000a6c8c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 00:49:28.069942 4291 log.go:172] (0xc0009596b0) Data frame received 
for 3\nI0312 00:49:28.070000 4291 log.go:172] (0xc000795360) (3) Data frame handling\nI0312 00:49:28.070015 4291 log.go:172] (0xc000795360) (3) Data frame sent\nI0312 00:49:28.070024 4291 log.go:172] (0xc0009596b0) Data frame received for 3\nI0312 00:49:28.070032 4291 log.go:172] (0xc000795360) (3) Data frame handling\nI0312 00:49:28.070059 4291 log.go:172] (0xc0009596b0) Data frame received for 5\nI0312 00:49:28.070069 4291 log.go:172] (0xc000a6c8c0) (5) Data frame handling\nI0312 00:49:28.071808 4291 log.go:172] (0xc0009596b0) Data frame received for 1\nI0312 00:49:28.071827 4291 log.go:172] (0xc000a6c820) (1) Data frame handling\nI0312 00:49:28.071838 4291 log.go:172] (0xc000a6c820) (1) Data frame sent\nI0312 00:49:28.071850 4291 log.go:172] (0xc0009596b0) (0xc000a6c820) Stream removed, broadcasting: 1\nI0312 00:49:28.071867 4291 log.go:172] (0xc0009596b0) Go away received\nI0312 00:49:28.072110 4291 log.go:172] (0xc0009596b0) (0xc000a6c820) Stream removed, broadcasting: 1\nI0312 00:49:28.072125 4291 log.go:172] (0xc0009596b0) (0xc000795360) Stream removed, broadcasting: 3\nI0312 00:49:28.072131 4291 log.go:172] (0xc0009596b0) (0xc000a6c8c0) Stream removed, broadcasting: 5\n" Mar 12 00:49:28.075: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 00:49:28.075: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 00:49:28.079: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 12 00:49:38.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 00:49:38.082: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 00:49:38.112: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999648s Mar 12 00:49:39.117: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.975313064s Mar 12 00:49:40.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970832033s Mar 12 00:49:41.125: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.967016748s Mar 12 00:49:42.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.963044228s Mar 12 00:49:43.133: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.958173554s Mar 12 00:49:44.137: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.954571701s Mar 12 00:49:45.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.950913841s Mar 12 00:49:46.145: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.94712575s Mar 12 00:49:47.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 943.0074ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5867 Mar 12 00:49:48.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 00:49:48.353: INFO: stderr: "I0312 00:49:48.289911 4311 log.go:172] (0xc000ada9a0) (0xc00097a000) Create stream\nI0312 00:49:48.289957 4311 log.go:172] (0xc000ada9a0) (0xc00097a000) Stream added, broadcasting: 1\nI0312 00:49:48.292035 4311 log.go:172] (0xc000ada9a0) Reply frame received for 1\nI0312 00:49:48.292073 4311 log.go:172] (0xc000ada9a0) (0xc000ab2460) Create stream\nI0312 00:49:48.292082 4311 log.go:172] (0xc000ada9a0) 
(0xc000ab2460) Stream added, broadcasting: 3\nI0312 00:49:48.292879 4311 log.go:172] (0xc000ada9a0) Reply frame received for 3\nI0312 00:49:48.292908 4311 log.go:172] (0xc000ada9a0) (0xc00097a0a0) Create stream\nI0312 00:49:48.292916 4311 log.go:172] (0xc000ada9a0) (0xc00097a0a0) Stream added, broadcasting: 5\nI0312 00:49:48.293706 4311 log.go:172] (0xc000ada9a0) Reply frame received for 5\nI0312 00:49:48.348187 4311 log.go:172] (0xc000ada9a0) Data frame received for 3\nI0312 00:49:48.348212 4311 log.go:172] (0xc000ab2460) (3) Data frame handling\nI0312 00:49:48.348219 4311 log.go:172] (0xc000ab2460) (3) Data frame sent\nI0312 00:49:48.348224 4311 log.go:172] (0xc000ada9a0) Data frame received for 3\nI0312 00:49:48.348228 4311 log.go:172] (0xc000ab2460) (3) Data frame handling\nI0312 00:49:48.348248 4311 log.go:172] (0xc000ada9a0) Data frame received for 5\nI0312 00:49:48.348253 4311 log.go:172] (0xc00097a0a0) (5) Data frame handling\nI0312 00:49:48.348259 4311 log.go:172] (0xc00097a0a0) (5) Data frame sent\nI0312 00:49:48.348264 4311 log.go:172] (0xc000ada9a0) Data frame received for 5\nI0312 00:49:48.348271 4311 log.go:172] (0xc00097a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 00:49:48.349557 4311 log.go:172] (0xc000ada9a0) Data frame received for 1\nI0312 00:49:48.349588 4311 log.go:172] (0xc00097a000) (1) Data frame handling\nI0312 00:49:48.349636 4311 log.go:172] (0xc00097a000) (1) Data frame sent\nI0312 00:49:48.349658 4311 log.go:172] (0xc000ada9a0) (0xc00097a000) Stream removed, broadcasting: 1\nI0312 00:49:48.349673 4311 log.go:172] (0xc000ada9a0) Go away received\nI0312 00:49:48.350380 4311 log.go:172] (0xc000ada9a0) (0xc00097a000) Stream removed, broadcasting: 1\nI0312 00:49:48.350404 4311 log.go:172] (0xc000ada9a0) (0xc000ab2460) Stream removed, broadcasting: 3\nI0312 00:49:48.350417 4311 log.go:172] (0xc000ada9a0) (0xc00097a0a0) Stream removed, broadcasting: 5\n" Mar 12 00:49:48.353: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 00:49:48.353: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 00:49:48.356: INFO: Found 1 stateful pods, waiting for 3 Mar 12 00:49:58.360: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:49:58.360: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 00:49:58.360: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 12 00:49:58.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 00:49:58.701: INFO: stderr: "I0312 00:49:58.637692 4333 log.go:172] (0xc000ad5290) (0xc000b74780) Create stream\nI0312 00:49:58.637715 4333 log.go:172] (0xc000ad5290) (0xc000b74780) Stream added, broadcasting: 1\nI0312 00:49:58.641564 4333 log.go:172] (0xc000ad5290) Reply frame received for 1\nI0312 00:49:58.641603 4333 log.go:172] (0xc000ad5290) (0xc00065dae0) Create stream\nI0312 00:49:58.641615 4333 log.go:172] (0xc000ad5290) (0xc00065dae0) Stream added, broadcasting: 3\nI0312 00:49:58.642589 4333 log.go:172] (0xc000ad5290) Reply frame received for 3\nI0312 00:49:58.642626 
4333 log.go:172] (0xc000ad5290) (0xc0005106e0) Create stream\nI0312 00:49:58.642640 4333 log.go:172] (0xc000ad5290) (0xc0005106e0) Stream added, broadcasting: 5\nI0312 00:49:58.643313 4333 log.go:172] (0xc000ad5290) Reply frame received for 5\nI0312 00:49:58.697117 4333 log.go:172] (0xc000ad5290) Data frame received for 5\nI0312 00:49:58.697149 4333 log.go:172] (0xc0005106e0) (5) Data frame handling\nI0312 00:49:58.697160 4333 log.go:172] (0xc0005106e0) (5) Data frame sent\nI0312 00:49:58.697169 4333 log.go:172] (0xc000ad5290) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 00:49:58.697174 4333 log.go:172] (0xc0005106e0) (5) Data frame handling\nI0312 00:49:58.697242 4333 log.go:172] (0xc000ad5290) Data frame received for 3\nI0312 00:49:58.697277 4333 log.go:172] (0xc00065dae0) (3) Data frame handling\nI0312 00:49:58.697305 4333 log.go:172] (0xc00065dae0) (3) Data frame sent\nI0312 00:49:58.697321 4333 log.go:172] (0xc000ad5290) Data frame received for 3\nI0312 00:49:58.697333 4333 log.go:172] (0xc00065dae0) (3) Data frame handling\nI0312 00:49:58.698420 4333 log.go:172] (0xc000ad5290) Data frame received for 1\nI0312 00:49:58.698440 4333 log.go:172] (0xc000b74780) (1) Data frame handling\nI0312 00:49:58.698454 4333 log.go:172] (0xc000b74780) (1) Data frame sent\nI0312 00:49:58.698468 4333 log.go:172] (0xc000ad5290) (0xc000b74780) Stream removed, broadcasting: 1\nI0312 00:49:58.698489 4333 log.go:172] (0xc000ad5290) Go away received\nI0312 00:49:58.698790 4333 log.go:172] (0xc000ad5290) (0xc000b74780) Stream removed, broadcasting: 1\nI0312 00:49:58.698806 4333 log.go:172] (0xc000ad5290) (0xc00065dae0) Stream removed, broadcasting: 3\nI0312 00:49:58.698813 4333 log.go:172] (0xc000ad5290) (0xc0005106e0) Stream removed, broadcasting: 5\n" Mar 12 00:49:58.701: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 00:49:58.701: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 00:49:58.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 00:49:59.421: INFO: stderr: "I0312 00:49:59.310768 4356 log.go:172] (0xc0009720b0) (0xc0006e4000) Create stream\nI0312 00:49:59.310800 4356 log.go:172] (0xc0009720b0) (0xc0006e4000) Stream added, broadcasting: 1\nI0312 00:49:59.312726 4356 log.go:172] (0xc0009720b0) Reply frame received for 1\nI0312 00:49:59.312749 4356 log.go:172] (0xc0009720b0) (0xc0006e40a0) Create stream\nI0312 00:49:59.312756 4356 log.go:172] (0xc0009720b0) (0xc0006e40a0) Stream added, broadcasting: 3\nI0312 00:49:59.313446 4356 log.go:172] (0xc0009720b0) Reply frame received for 3\nI0312 00:49:59.313474 4356 log.go:172] (0xc0009720b0) (0xc0006c9d60) Create stream\nI0312 00:49:59.313484 4356 log.go:172] (0xc0009720b0) (0xc0006c9d60) Stream added, broadcasting: 5\nI0312 00:49:59.314365 4356 log.go:172] (0xc0009720b0) Reply frame received for 5\nI0312 00:49:59.391328 4356 log.go:172] (0xc0009720b0) Data frame received for 5\nI0312 00:49:59.391344 4356 log.go:172] (0xc0006c9d60) (5) Data frame handling\nI0312 00:49:59.391354 4356 log.go:172] (0xc0006c9d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 00:49:59.417261 4356 log.go:172] (0xc0009720b0) Data frame received for 3\nI0312 00:49:59.417278 4356 log.go:172] 
(0xc0006e40a0) (3) Data frame handling\nI0312 00:49:59.417288 4356 log.go:172] (0xc0006e40a0) (3) Data frame sent\nI0312 00:49:59.417495 4356 log.go:172] (0xc0009720b0) Data frame received for 5\nI0312 00:49:59.417520 4356 log.go:172] (0xc0006c9d60) (5) Data frame handling\nI0312 00:49:59.417542 4356 log.go:172] (0xc0009720b0) Data frame received for 3\nI0312 00:49:59.417555 4356 log.go:172] (0xc0006e40a0) (3) Data frame handling\nI0312 00:49:59.418326 4356 log.go:172] (0xc0009720b0) Data frame received for 1\nI0312 00:49:59.418336 4356 log.go:172] (0xc0006e4000) (1) Data frame handling\nI0312 00:49:59.418341 4356 log.go:172] (0xc0006e4000) (1) Data frame sent\nI0312 00:49:59.418386 4356 log.go:172] (0xc0009720b0) (0xc0006e4000) Stream removed, broadcasting: 1\nI0312 00:49:59.418400 4356 log.go:172] (0xc0009720b0) Go away received\nI0312 00:49:59.418620 4356 log.go:172] (0xc0009720b0) (0xc0006e4000) Stream removed, broadcasting: 1\nI0312 00:49:59.418640 4356 log.go:172] (0xc0009720b0) (0xc0006e40a0) Stream removed, broadcasting: 3\nI0312 00:49:59.418648 4356 log.go:172] (0xc0009720b0) (0xc0006c9d60) Stream removed, broadcasting: 5\n" Mar 12 00:49:59.421: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 00:49:59.421: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 00:49:59.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 00:50:00.602: INFO: stderr: "I0312 00:50:00.521094 4385 log.go:172] (0xc000ace000) (0xc000b98000) Create stream\nI0312 00:50:00.521126 4385 log.go:172] (0xc000ace000) (0xc000b98000) Stream added, broadcasting: 1\nI0312 00:50:00.523252 4385 log.go:172] (0xc000ace000) Reply frame received for 1\nI0312 00:50:00.523283 4385 log.go:172] (0xc000ace000) (0xc00080c000) Create stream\nI0312 00:50:00.523291 4385 log.go:172] (0xc000ace000) (0xc00080c000) Stream added, broadcasting: 3\nI0312 00:50:00.524020 4385 log.go:172] (0xc000ace000) Reply frame received for 3\nI0312 00:50:00.524047 4385 log.go:172] (0xc000ace000) (0xc00089e000) Create stream\nI0312 00:50:00.524054 4385 log.go:172] (0xc000ace000) (0xc00089e000) Stream added, broadcasting: 5\nI0312 00:50:00.524750 4385 log.go:172] (0xc000ace000) Reply frame received for 5\nI0312 00:50:00.580418 4385 log.go:172] (0xc000ace000) Data frame received for 5\nI0312 00:50:00.580453 4385 log.go:172] (0xc00089e000) (5) Data frame handling\nI0312 00:50:00.580473 4385 log.go:172] (0xc00089e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 00:50:00.595379 4385 log.go:172] (0xc000ace000) Data frame received for 3\nI0312 00:50:00.595405 4385 log.go:172] (0xc00080c000) (3) Data frame handling\nI0312 00:50:00.595414 4385 log.go:172] (0xc00080c000) (3) Data frame sent\nI0312 00:50:00.595421 4385 log.go:172] (0xc000ace000) Data frame received for 3\nI0312 00:50:00.595429 4385 log.go:172] (0xc00080c000) (3) Data frame handling\nI0312 00:50:00.595459 4385 log.go:172] (0xc000ace000) Data frame received for 5\nI0312 00:50:00.595478 4385 log.go:172] (0xc00089e000) (5) Data frame handling\nI0312 00:50:00.597706 4385 log.go:172] (0xc000ace000) Data frame received for 1\nI0312 00:50:00.597731 4385 log.go:172] (0xc000b98000) (1) Data frame handling\nI0312 00:50:00.597739 4385 log.go:172] (0xc000b98000) (1) Data 
frame sent\nI0312 00:50:00.597747 4385 log.go:172] (0xc000ace000) (0xc000b98000) Stream removed, broadcasting: 1\nI0312 00:50:00.597989 4385 log.go:172] (0xc000ace000) (0xc000b98000) Stream removed, broadcasting: 1\nI0312 00:50:00.598003 4385 log.go:172] (0xc000ace000) (0xc00080c000) Stream removed, broadcasting: 3\nI0312 00:50:00.598010 4385 log.go:172] (0xc000ace000) (0xc00089e000) Stream removed, broadcasting: 5\n" Mar 12 00:50:00.602: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 00:50:00.602: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 00:50:00.602: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 00:50:00.604: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 12 00:50:10.611: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 00:50:10.611: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 12 00:50:10.611: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 12 00:50:10.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999603s Mar 12 00:50:11.626: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993074682s Mar 12 00:50:12.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988652664s Mar 12 00:50:13.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984745766s Mar 12 00:50:14.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980498134s Mar 12 00:50:15.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976407836s Mar 12 00:50:16.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971932839s Mar 12 00:50:17.656: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.968388278s Mar 12 00:50:18.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958658375s Mar 12 00:50:19.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 954.482866ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5867 Mar 12 00:50:20.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 00:50:20.865: INFO: stderr: "I0312 00:50:20.801646 4420 log.go:172] (0xc0003c4fd0) (0xc000669a40) Create stream\nI0312 00:50:20.801697 4420 log.go:172] (0xc0003c4fd0) (0xc000669a40) Stream added, broadcasting: 1\nI0312 00:50:20.803658 4420 log.go:172] (0xc0003c4fd0) Reply frame received for 1\nI0312 00:50:20.803694 4420 log.go:172] (0xc0003c4fd0) (0xc0008aa000) Create stream\nI0312 00:50:20.803704 4420 log.go:172] (0xc0003c4fd0) (0xc0008aa000) Stream added, broadcasting: 3\nI0312 00:50:20.804394 4420 log.go:172] (0xc0003c4fd0) Reply frame received for 3\nI0312 00:50:20.804423 4420 log.go:172] (0xc0003c4fd0) (0xc000669c20) Create stream\nI0312 00:50:20.804433 4420 log.go:172] (0xc0003c4fd0) (0xc000669c20) Stream added, broadcasting: 5\nI0312 00:50:20.805134 4420 log.go:172] (0xc0003c4fd0) Reply frame received for 5\nI0312 00:50:20.860905 4420 log.go:172] (0xc0003c4fd0) Data frame received for 5\nI0312 00:50:20.860926 4420 log.go:172] (0xc000669c20) (5) Data frame handling\nI0312 00:50:20.860937
4420 log.go:172] (0xc000669c20) (5) Data frame sent\nI0312 00:50:20.860944 4420 log.go:172] (0xc0003c4fd0) Data frame received for 5\nI0312 00:50:20.860951 4420 log.go:172] (0xc000669c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 00:50:20.860988 4420 log.go:172] (0xc0003c4fd0) Data frame received for 3\nI0312 00:50:20.860995 4420 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0312 00:50:20.861003 4420 log.go:172] (0xc0008aa000) (3) Data frame sent\nI0312 00:50:20.861010 4420 log.go:172] (0xc0003c4fd0) Data frame received for 3\nI0312 00:50:20.861016 4420 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0312 00:50:20.862048 4420 log.go:172] (0xc0003c4fd0) Data frame received for 1\nI0312 00:50:20.862065 4420 log.go:172] (0xc000669a40) (1) Data frame handling\nI0312 00:50:20.862072 4420 log.go:172] (0xc000669a40) (1) Data frame sent\nI0312 00:50:20.862081 4420 log.go:172] (0xc0003c4fd0) (0xc000669a40) Stream removed, broadcasting: 1\nI0312 00:50:20.862100 4420 log.go:172] (0xc0003c4fd0) Go away received\nI0312 00:50:20.862369 4420 log.go:172] (0xc0003c4fd0) (0xc000669a40) Stream removed, broadcasting: 1\nI0312 00:50:20.862387 4420 log.go:172] (0xc0003c4fd0) (0xc0008aa000) Stream removed, broadcasting: 3\nI0312 00:50:20.862396 4420 log.go:172] (0xc0003c4fd0) (0xc000669c20) Stream removed, broadcasting: 5\n" Mar 12 00:50:20.865: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 00:50:20.865: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 00:50:20.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 00:50:21.052: INFO: stderr: "I0312 00:50:20.967867 4440 log.go:172] (0xc000bb3760) (0xc00098c8c0) Create stream\nI0312 00:50:20.967908 4440 log.go:172] (0xc000bb3760) (0xc00098c8c0) Stream added, broadcasting: 1\nI0312 00:50:20.971059 4440 log.go:172] (0xc000bb3760) Reply frame received for 1\nI0312 00:50:20.971088 4440 log.go:172] (0xc000bb3760) (0xc00098c000) Create stream\nI0312 00:50:20.971096 4440 log.go:172] (0xc000bb3760) (0xc00098c000) Stream added, broadcasting: 3\nI0312 00:50:20.971858 4440 log.go:172] (0xc000bb3760) Reply frame received for 3\nI0312 00:50:20.971879 4440 log.go:172] (0xc000bb3760) (0xc000737ae0) Create stream\nI0312 00:50:20.971887 4440 log.go:172] (0xc000bb3760) (0xc000737ae0) Stream added, broadcasting: 5\nI0312 00:50:20.972663 4440 log.go:172] (0xc000bb3760) Reply frame received for 5\nI0312 00:50:21.048585 4440 log.go:172] (0xc000bb3760) Data frame received for 5\nI0312 00:50:21.048608 4440 log.go:172] (0xc000737ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 00:50:21.048639 4440 log.go:172] (0xc000bb3760) Data frame received for 3\nI0312 00:50:21.048672 4440 log.go:172] (0xc00098c000) (3) Data frame handling\nI0312 00:50:21.048687 4440 log.go:172] (0xc00098c000) (3) Data frame sent\nI0312 00:50:21.048700 4440 log.go:172] (0xc000bb3760) Data frame received for 3\nI0312 00:50:21.048712 4440 log.go:172] (0xc00098c000) (3) Data frame handling\nI0312 00:50:21.048726 4440 log.go:172] (0xc000737ae0) (5) Data frame sent\nI0312 00:50:21.048738 4440 log.go:172] (0xc000bb3760) Data frame received for 5\nI0312 00:50:21.048755 4440 log.go:172] (0xc000737ae0) (5) Data frame 
handling\nI0312 00:50:21.049627 4440 log.go:172] (0xc000bb3760) Data frame received for 1\nI0312 00:50:21.049645 4440 log.go:172] (0xc00098c8c0) (1) Data frame handling\nI0312 00:50:21.049661 4440 log.go:172] (0xc00098c8c0) (1) Data frame sent\nI0312 00:50:21.049828 4440 log.go:172] (0xc000bb3760) (0xc00098c8c0) Stream removed, broadcasting: 1\nI0312 00:50:21.049893 4440 log.go:172] (0xc000bb3760) Go away received\nI0312 00:50:21.050064 4440 log.go:172] (0xc000bb3760) (0xc00098c8c0) Stream removed, broadcasting: 1\nI0312 00:50:21.050076 4440 log.go:172] (0xc000bb3760) (0xc00098c000) Stream removed, broadcasting: 3\nI0312 00:50:21.050082 4440 log.go:172] (0xc000bb3760) (0xc000737ae0) Stream removed, broadcasting: 5\n" Mar 12 00:50:21.052: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 00:50:21.052: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 00:50:21.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5867 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 00:50:21.211: INFO: stderr: "I0312 00:50:21.147745 4460 log.go:172] (0xc0008cea50) (0xc0008aa140) Create stream\nI0312 00:50:21.147780 4460 log.go:172] (0xc0008cea50) (0xc0008aa140) Stream added, broadcasting: 1\nI0312 00:50:21.149387 4460 log.go:172] (0xc0008cea50) Reply frame received for 1\nI0312 00:50:21.149412 4460 log.go:172] (0xc0008cea50) (0xc000a34000) Create stream\nI0312 00:50:21.149422 4460 log.go:172] (0xc0008cea50) (0xc000a34000) Stream added, broadcasting: 3\nI0312 00:50:21.149939 4460 log.go:172] (0xc0008cea50) Reply frame received for 3\nI0312 00:50:21.149959 4460 log.go:172] (0xc0008cea50) (0xc0006a3b80) Create stream\nI0312 00:50:21.149965 4460 log.go:172] (0xc0008cea50) (0xc0006a3b80) Stream added, broadcasting: 5\nI0312 00:50:21.150495 4460 log.go:172] (0xc0008cea50) Reply frame received for 5\nI0312 00:50:21.207629 4460 log.go:172] (0xc0008cea50) Data frame received for 3\nI0312 00:50:21.207654 4460 log.go:172] (0xc000a34000) (3) Data frame handling\nI0312 00:50:21.207660 4460 log.go:172] (0xc000a34000) (3) Data frame sent\nI0312 00:50:21.207664 4460 log.go:172] (0xc0008cea50) Data frame received for 3\nI0312 00:50:21.207668 4460 log.go:172] (0xc000a34000) (3) Data frame handling\nI0312 00:50:21.207685 4460 log.go:172] (0xc0008cea50) Data frame received for 5\nI0312 00:50:21.207690 4460 log.go:172] (0xc0006a3b80) (5) Data frame handling\nI0312 00:50:21.207695 4460 log.go:172] (0xc0006a3b80) (5) Data frame sent\nI0312 00:50:21.207699 4460 log.go:172] (0xc0008cea50) Data frame received for 5\nI0312 00:50:21.207703 4460 log.go:172] (0xc0006a3b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 00:50:21.208603 4460 log.go:172] (0xc0008cea50) Data frame received for 1\nI0312 00:50:21.208622 4460 log.go:172] (0xc0008aa140) (1) Data frame handling\nI0312 00:50:21.208633 4460 log.go:172] (0xc0008aa140) (1) Data frame sent\nI0312 00:50:21.208648 4460 log.go:172] (0xc0008cea50) (0xc0008aa140) Stream removed, broadcasting: 1\nI0312 00:50:21.208660 4460 log.go:172] (0xc0008cea50) Go away received\nI0312 00:50:21.208952 4460 log.go:172] (0xc0008cea50) (0xc0008aa140) Stream removed, broadcasting: 1\nI0312 00:50:21.208967 4460 log.go:172] (0xc0008cea50) (0xc000a34000) Stream removed, broadcasting: 3\nI0312 00:50:21.208974 4460 
log.go:172] (0xc0008cea50) (0xc0006a3b80) Stream removed, broadcasting: 5\n" Mar 12 00:50:21.211: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 00:50:21.211: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 00:50:21.211: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 00:50:51.222: INFO: Deleting all statefulset in ns statefulset-5867 Mar 12 00:50:51.224: INFO: Scaling statefulset ss to 0 Mar 12 00:50:51.230: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 00:50:51.231: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:50:51.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5867" for this suite. • [SLOW TEST:93.523 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":268,"skipped":4406,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:50:51.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 12 00:50:51.331: INFO: Waiting up to 5m0s for pod "pod-c1e8b915-868e-4b5d-a581-196904b12323" in namespace "emptydir-2853" to be "success or failure" Mar 12 00:50:51.333: INFO: Pod "pod-c1e8b915-868e-4b5d-a581-196904b12323": Phase="Pending", Reason="", readiness=false. Elapsed: 1.715494ms Mar 12 00:50:53.336: INFO: Pod "pod-c1e8b915-868e-4b5d-a581-196904b12323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004631112s Mar 12 00:50:55.339: INFO: Pod "pod-c1e8b915-868e-4b5d-a581-196904b12323": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007606575s STEP: Saw pod success Mar 12 00:50:55.339: INFO: Pod "pod-c1e8b915-868e-4b5d-a581-196904b12323" satisfied condition "success or failure" Mar 12 00:50:55.340: INFO: Trying to get logs from node latest-worker pod pod-c1e8b915-868e-4b5d-a581-196904b12323 container test-container: STEP: delete the pod Mar 12 00:50:55.366: INFO: Waiting for pod pod-c1e8b915-868e-4b5d-a581-196904b12323 to disappear Mar 12 00:50:55.371: INFO: Pod pod-c1e8b915-868e-4b5d-a581-196904b12323 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:50:55.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2853" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4414,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:50:55.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 00:50:57.487: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:50:57.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9269" for this suite. 
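The termination-message check above exercises TerminationMessagePolicy FallbackToLogsOnError: because the container exits successfully and writes no termination-log file, the kubelet records an empty message rather than falling back to the container logs. A minimal sketch of a pod that reproduces this behavior outside the suite (pod name and image are illustrative assumptions, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Succeeds without writing /dev/termination-log, so with
    # FallbackToLogsOnError the recorded message stays empty.
    command: ["sh", "-c", "echo hello; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod has succeeded, this jsonpath should print nothing:
kubectl get pod termination-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'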
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":270,"skipped":4423,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:50:57.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 00:50:58.200: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 00:51:00.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719571058, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719571058, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719571058, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719571058, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 00:51:03.232: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:51:03.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3292-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:04.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3272" for this suite. STEP: Destroying namespace "webhook-3272-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.112 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":271,"skipped":4429,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:04.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Mar 12 00:51:04.660: INFO: Waiting up to 5m0s for pod "downward-api-3821aaf9-e934-4b17-8dad-3f3d5ab76814" in namespace "downward-api-6929" to be "success or failure" Mar 12 00:51:04.664: INFO: Pod "downward-api-3821aaf9-e934-4b17-8dad-3f3d5ab76814": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423138ms Mar 12 00:51:06.679: INFO: Pod "downward-api-3821aaf9-e934-4b17-8dad-3f3d5ab76814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019004532s STEP: Saw pod success Mar 12 00:51:06.679: INFO: Pod "downward-api-3821aaf9-e934-4b17-8dad-3f3d5ab76814" satisfied condition "success or failure" Mar 12 00:51:06.681: INFO: Trying to get logs from node latest-worker pod downward-api-3821aaf9-e934-4b17-8dad-3f3d5ab76814 container dapi-container: STEP: delete the pod Mar 12 00:51:06.695: INFO: Waiting for pod downward-api-3821aaf9-e934-4b17-8dad-3f3d5ab76814 to disappear Mar 12 00:51:06.700: INFO: Pod downward-api-3821aaf9-e934-4b17-8dad-3f3d5ab76814 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:06.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6929" for this suite. 
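The downward API test above injects the pod's own UID into the container environment via a fieldRef. A minimal sketch of the same mechanism (pod name, variable name, and image are illustrative assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          # metadata.uid is one of the pod fields the downward API
          # exposes as an environment variable.
          fieldPath: metadata.uid
EOF
# The pod's log should show POD_UID=<the pod's UID>:
kubectl logs downward-api-demo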
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":272,"skipped":4463,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:06.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-c6103f18-0710-43dd-bac5-6902d0d933aa STEP: Creating a pod to test consume secrets Mar 12 00:51:06.792: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7aebb3e6-7e76-4131-b49d-8e340a9a366c" in namespace "projected-2945" to be "success or failure" Mar 12 00:51:06.794: INFO: Pod "pod-projected-secrets-7aebb3e6-7e76-4131-b49d-8e340a9a366c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.618522ms Mar 12 00:51:08.817: INFO: Pod "pod-projected-secrets-7aebb3e6-7e76-4131-b49d-8e340a9a366c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024670472s STEP: Saw pod success Mar 12 00:51:08.817: INFO: Pod "pod-projected-secrets-7aebb3e6-7e76-4131-b49d-8e340a9a366c" satisfied condition "success or failure" Mar 12 00:51:08.819: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7aebb3e6-7e76-4131-b49d-8e340a9a366c container projected-secret-volume-test: STEP: delete the pod Mar 12 00:51:08.850: INFO: Waiting for pod pod-projected-secrets-7aebb3e6-7e76-4131-b49d-8e340a9a366c to disappear Mar 12 00:51:08.859: INFO: Pod pod-projected-secrets-7aebb3e6-7e76-4131-b49d-8e340a9a366c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:08.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2945" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4465,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:08.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-dafa5888-2a4a-431c-bf3b-8bc86fc0a8fa STEP: Creating a pod to test consume configMaps Mar 12 00:51:08.939: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9be360a-da45-4365-8eb2-2148ce44b6f1" in namespace "configmap-2997" to be "success or failure" Mar 12 00:51:08.944: INFO: Pod "pod-configmaps-e9be360a-da45-4365-8eb2-2148ce44b6f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.805962ms Mar 12 00:51:10.947: INFO: Pod "pod-configmaps-e9be360a-da45-4365-8eb2-2148ce44b6f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008571906s STEP: Saw pod success Mar 12 00:51:10.948: INFO: Pod "pod-configmaps-e9be360a-da45-4365-8eb2-2148ce44b6f1" satisfied condition "success or failure" Mar 12 00:51:10.951: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e9be360a-da45-4365-8eb2-2148ce44b6f1 container configmap-volume-test: STEP: delete the pod Mar 12 00:51:10.969: INFO: Waiting for pod pod-configmaps-e9be360a-da45-4365-8eb2-2148ce44b6f1 to disappear Mar 12 00:51:10.973: INFO: Pod pod-configmaps-e9be360a-da45-4365-8eb2-2148ce44b6f1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:10.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2997" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":274,"skipped":4509,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:10.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:11.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1903" for this suite. STEP: Destroying namespace "nspatchtest-0d4accaa-9afa-4cd3-a432-298c298f966d-8231" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":275,"skipped":4528,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:11.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:51:11.195: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 12 00:51:16.210: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 00:51:16.210: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 12 00:51:16.250: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5743 /apis/apps/v1/namespaces/deployment-5743/deployments/test-cleanup-deployment 3f0d4fa4-adf7-4aef-9380-974f2d335eb8 952309 1 2020-03-12 00:51:16 +0000 UTC map[name:cleanup-pod] map[] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f56c58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 12 00:51:16.258: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-5743 /apis/apps/v1/namespaces/deployment-5743/replicasets/test-cleanup-deployment-55ffc6b7b6 7aef1d91-b972-4d79-8b74-4f5bdec9578c 952311 1 2020-03-12 00:51:16 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3f0d4fa4-adf7-4aef-9380-974f2d335eb8 0xc001f57047 0xc001f57048}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f570b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 00:51:16.258: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 12 00:51:16.258: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5743 /apis/apps/v1/namespaces/deployment-5743/replicasets/test-cleanup-controller e1e84afd-91f4-44c7-92a6-d6626b1c8b15 952310 1 2020-03-12 00:51:11 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 3f0d4fa4-adf7-4aef-9380-974f2d335eb8 0xc001f56f77 0xc001f56f78}] 
[] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001f56fd8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 00:51:16.303: INFO: Pod "test-cleanup-controller-jt2mp" is available: &Pod{ObjectMeta:{test-cleanup-controller-jt2mp test-cleanup-controller- deployment-5743 /api/v1/namespaces/deployment-5743/pods/test-cleanup-controller-jt2mp f7e39c1d-57c3-4e9b-8ac2-b0c9518fce85 952278 0 2020-03-12 00:51:11 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller e1e84afd-91f4-44c7-92a6-d6626b1c8b15 0xc001f574e7 0xc001f574e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5j2ml,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5j2ml,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5j2ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,D
NSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:51:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:51:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:51:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:51:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.216,StartTime:2020-03-12 00:51:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 00:51:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cc6e57999488bc91d3c13ab22a12559eba80123bfe402d3a579dd2d2e061a9d3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 00:51:16.304: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-knn7p" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-knn7p test-cleanup-deployment-55ffc6b7b6- deployment-5743 /api/v1/namespaces/deployment-5743/pods/test-cleanup-deployment-55ffc6b7b6-knn7p 02ce12f6-830f-48d5-a87b-941499f39429 952316 0 2020-03-12 00:51:16 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 7aef1d91-b972-4d79-8b74-4f5bdec9578c 0xc001f57677 0xc001f57678}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5j2ml,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5j2ml,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5j2ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 00:51:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:16.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5743" for this suite. 
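The dumps above show RevisionHistoryLimit:*0 on test-cleanup-deployment, which is what drives this test: with a history limit of 0, a Deployment's old ReplicaSets are deleted as soon as the rollout no longer needs them. A minimal way to observe the same behavior (deployment name and images are illustrative assumptions):

kubectl create deployment cleanup-demo --image=nginx:1.19
kubectl patch deployment cleanup-demo -p '{"spec":{"revisionHistoryLimit":0}}'
kubectl set image deployment/cleanup-demo nginx=nginx:1.20
# After the rollout completes, only the new ReplicaSet should remain:
kubectl get replicasets -l app=cleanup-demo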
• [SLOW TEST:5.209 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":276,"skipped":4541,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:16.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating pod Mar 12 00:51:18.450: INFO: Pod pod-hostip-c46fe448-b3ad-4388-ac84-935a06c578b4 has hostIP: 172.17.0.18 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:18.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7584" for this suite. 
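The host-IP test above only checks that status.hostIP is populated once the pod is scheduled. The same field can be read directly (pod name and image are illustrative assumptions):

kubectl run hostip-demo --image=busybox --restart=Never -- sleep 3600
# Prints the IP of the node hosting the pod once it is scheduled:
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'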
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":277,"skipped":4541,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:18.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 12 00:51:18.553: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8140 /api/v1/namespaces/watch-8140/configmaps/e2e-watch-test-label-changed ff9ef5ec-a476-4658-9bf0-4d3d5d8881d8 952364 0 2020-03-12 00:51:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:51:18.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8140 /api/v1/namespaces/watch-8140/configmaps/e2e-watch-test-label-changed ff9ef5ec-a476-4658-9bf0-4d3d5d8881d8 952365 0 2020-03-12 00:51:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:51:18.553: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8140 /api/v1/namespaces/watch-8140/configmaps/e2e-watch-test-label-changed ff9ef5ec-a476-4658-9bf0-4d3d5d8881d8 952366 0 2020-03-12 00:51:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 12 00:51:28.587: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8140 /api/v1/namespaces/watch-8140/configmaps/e2e-watch-test-label-changed ff9ef5ec-a476-4658-9bf0-4d3d5d8881d8 952425 0 2020-03-12 00:51:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:51:28.587: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8140 /api/v1/namespaces/watch-8140/configmaps/e2e-watch-test-label-changed 
ff9ef5ec-a476-4658-9bf0-4d3d5d8881d8 952426 0 2020-03-12 00:51:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 12 00:51:28.587: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8140 /api/v1/namespaces/watch-8140/configmaps/e2e-watch-test-label-changed ff9ef5ec-a476-4658-9bf0-4d3d5d8881d8 952427 0 2020-03-12 00:51:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:28.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8140" for this suite. • [SLOW TEST:10.138 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":278,"skipped":4546,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Mar 12 00:51:28.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Mar 12 00:51:28.648: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 00:51:30.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-174 create -f -' Mar 12 00:51:32.616: INFO: stderr: "" Mar 12 00:51:32.616: INFO: stdout: "e2e-test-crd-publish-openapi-8969-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 00:51:32.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-174 delete e2e-test-crd-publish-openapi-8969-crds test-cr' Mar 12 00:51:32.725: INFO: stderr: "" Mar 12 00:51:32.725: INFO: stdout: "e2e-test-crd-publish-openapi-8969-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 12 00:51:32.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-174 apply -f -' Mar 12 00:51:32.998: INFO: stderr: "" Mar 12 00:51:32.998: INFO: stdout: "e2e-test-crd-publish-openapi-8969-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 00:51:32.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-174 delete e2e-test-crd-publish-openapi-8969-crds test-cr' Mar 12 00:51:33.114: INFO: stderr: "" Mar 12 00:51:33.115: INFO: stdout: "e2e-test-crd-publish-openapi-8969-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 12 00:51:33.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8969-crds' Mar 12 00:51:33.347: INFO: stderr: "" Mar 12 00:51:33.347: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8969-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Mar 12 00:51:35.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-174" for this suite. • [SLOW TEST:6.527 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":279,"skipped":4555,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS
Mar 12 00:51:35.122: INFO: Running AfterSuite actions on all nodes Mar 12 00:51:35.122: INFO: Running AfterSuite actions on node 1 Mar 12 00:51:35.122: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
Summarizing 1 Failure:
[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762
Ran 280 of 4845 Specs in 4552.493 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (4552.60s)
FAIL
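The run therefore ends with a single failing spec, "Should recreate evicted statefulset". When triaging a result like this, one common next step is to re-run only the failing spec with a Ginkgo focus expression; the invocation below is an illustrative sketch, and the binary path, provider, and kubeconfig location depend on how the suite was built and deployed:

./e2e.test --kubeconfig=/root/.kube/config --provider=skeleton \
  --ginkgo.focus='Should recreate evicted statefulset'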