I0312 19:03:43.208562 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0312 19:03:43.208805 6 e2e.go:109] Starting e2e run "c212bd9b-05e1-49f9-850c-bcbb3611cb19" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584039822 - Will randomize all specs
Will run 278 of 4843 specs

Mar 12 19:03:43.269: INFO: >>> kubeConfig: /root/.kube/config
Mar 12 19:03:43.272: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 12 19:03:43.286: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 12 19:03:43.313: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 12 19:03:43.313: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 12 19:03:43.313: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 12 19:03:43.322: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 12 19:03:43.322: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 12 19:03:43.322: INFO: e2e test version: v1.17.3
Mar 12 19:03:43.323: INFO: kube-apiserver version: v1.17.2
Mar 12 19:03:43.323: INFO: >>> kubeConfig: /root/.kube/config
Mar 12 19:03:43.327: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 12 19:03:43.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Mar 12 19:03:43.360: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-58gr
STEP: Creating a pod to test atomic-volume-subpath
Mar 12 19:03:43.410: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-58gr" in namespace "subpath-2003" to be "success or failure"
Mar 12 19:03:43.414: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205114ms
Mar 12 19:03:45.418: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 2.00846957s
Mar 12 19:03:47.422: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 4.012120355s
Mar 12 19:03:49.425: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 6.01594236s
Mar 12 19:03:51.429: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 8.019310589s
Mar 12 19:03:53.432: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 10.022803311s
Mar 12 19:03:55.436: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 12.026805391s
Mar 12 19:03:57.440: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 14.030523875s
Mar 12 19:03:59.444: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 16.034189359s
Mar 12 19:04:01.448: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 18.038036954s
Mar 12 19:04:03.451: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Running", Reason="", readiness=true. Elapsed: 20.04137148s
Mar 12 19:04:05.454: INFO: Pod "pod-subpath-test-secret-58gr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.044884579s
STEP: Saw pod success
Mar 12 19:04:05.454: INFO: Pod "pod-subpath-test-secret-58gr" satisfied condition "success or failure"
Mar 12 19:04:05.457: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-58gr container test-container-subpath-secret-58gr:
STEP: delete the pod
Mar 12 19:04:05.520: INFO: Waiting for pod pod-subpath-test-secret-58gr to disappear
Mar 12 19:04:05.537: INFO: Pod pod-subpath-test-secret-58gr no longer exists
STEP: Deleting pod pod-subpath-test-secret-58gr
Mar 12 19:04:05.537: INFO: Deleting pod "pod-subpath-test-secret-58gr" in namespace "subpath-2003"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 12 19:04:05.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2003" for this suite.
• [SLOW TEST:22.220 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":1,"skipped":34,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 12 19:04:05.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-4869
STEP: Waiting for pods to come up.
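A note on the Subpath test that just passed: it mounts a single key of a Secret into the container through a subPath volume mount, then polls the pod to Succeeded as logged above. A minimal sketch of such a pod follows; the pod name, image, and key are illustrative, not taken from the test itself:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-example   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret          # assumes this Secret exists with key "data-1"
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /probe-volume/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /probe-volume/data-1
      subPath: data-1                # mounts only this single key, the path the atomic-writer test exercises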
STEP: Creating tester pod tester in namespace prestop-4869 STEP: Deleting pre-stop pod Mar 12 19:04:14.636: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4869" for this suite. • [SLOW TEST:9.153 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":2,"skipped":36,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:14.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:04:14.778: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 19:04:17.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8138 create -f -' Mar 12 19:04:19.522: INFO: stderr: "" Mar 12 19:04:19.522: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 12 19:04:19.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8138 delete e2e-test-crd-publish-openapi-2016-crds test-cr' Mar 12 19:04:19.627: INFO: stderr: "" Mar 12 19:04:19.628: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 12 19:04:19.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8138 apply -f -' Mar 12 19:04:19.887: INFO: stderr: "" Mar 12 19:04:19.887: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 12 19:04:19.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8138 delete e2e-test-crd-publish-openapi-2016-crds test-cr' Mar 12 
19:04:19.987: INFO: stderr: "" Mar 12 19:04:19.987: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 12 19:04:19.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2016-crds' Mar 12 19:04:20.167: INFO: stderr: "" Mar 12 19:04:20.167: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2016-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8138" for this suite. 
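For reference, the CustomResourcePublishOpenAPI test above registers a CRD whose published schema preserves unknown fields inside an embedded object, which is why kubectl accepted CRs with arbitrary properties and why explain prints "Specification of Waldo". A hedged sketch of such a CRD; the group, names, and exact field layout are illustrative (the test generates random names like e2e-test-crd-publish-openapi-2016-crd):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com           # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true   # unknown fields under spec survive pruning
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true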
• [SLOW TEST:8.281 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":3,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:22.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:04:23.056: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:24.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5526" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":4,"skipped":81,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:24.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-16bf0e3a-670f-405c-99b6-180bbdc24d5b STEP: Creating a pod to test consume configMaps Mar 12 19:04:24.558: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-88c125a4-28a5-4a3e-b405-0aa15bfc756f" in namespace "projected-763" to be "success or failure" Mar 12 19:04:24.563: INFO: Pod "pod-projected-configmaps-88c125a4-28a5-4a3e-b405-0aa15bfc756f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.350989ms Mar 12 19:04:26.565: INFO: Pod "pod-projected-configmaps-88c125a4-28a5-4a3e-b405-0aa15bfc756f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00729744s STEP: Saw pod success Mar 12 19:04:26.566: INFO: Pod "pod-projected-configmaps-88c125a4-28a5-4a3e-b405-0aa15bfc756f" satisfied condition "success or failure" Mar 12 19:04:26.567: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-88c125a4-28a5-4a3e-b405-0aa15bfc756f container projected-configmap-volume-test: STEP: delete the pod Mar 12 19:04:26.592: INFO: Waiting for pod pod-projected-configmaps-88c125a4-28a5-4a3e-b405-0aa15bfc756f to disappear Mar 12 19:04:26.620: INFO: Pod pod-projected-configmaps-88c125a4-28a5-4a3e-b405-0aa15bfc756f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:26.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-763" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":93,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:26.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7c36e2b5-506c-4176-8ea3-03be22144d15 STEP: Creating a pod to test consume secrets Mar 12 19:04:26.684: INFO: Waiting up to 5m0s for pod "pod-secrets-e21c6018-3beb-47a2-9a77-360d8f43831b" in namespace "secrets-5476" to be "success or failure" Mar 12 19:04:26.688: INFO: Pod "pod-secrets-e21c6018-3beb-47a2-9a77-360d8f43831b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142061ms Mar 12 19:04:28.692: INFO: Pod "pod-secrets-e21c6018-3beb-47a2-9a77-360d8f43831b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007945573s STEP: Saw pod success Mar 12 19:04:28.692: INFO: Pod "pod-secrets-e21c6018-3beb-47a2-9a77-360d8f43831b" satisfied condition "success or failure" Mar 12 19:04:28.694: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-e21c6018-3beb-47a2-9a77-360d8f43831b container secret-env-test: STEP: delete the pod Mar 12 19:04:28.726: INFO: Waiting for pod pod-secrets-e21c6018-3beb-47a2-9a77-360d8f43831b to disappear Mar 12 19:04:28.731: INFO: Pod pod-secrets-e21c6018-3beb-47a2-9a77-360d8f43831b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:28.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5476" for this suite. 
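The Secrets test that just finished injects a Secret key into a container environment variable and inspects the pod output. A minimal sketch under assumed names (Secret name and key are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example  # assumes this Secret exists
          key: data-1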
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":96,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:28.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 12 19:04:34.890: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 19:04:34.907: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 19:04:36.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 19:04:36.911: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 19:04:38.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 19:04:38.911: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 19:04:40.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 19:04:40.911: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 19:04:42.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 19:04:42.910: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 19:04:44.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 19:04:44.932: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 19:04:46.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 19:04:46.911: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:46.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3071" for this suite. 
• [SLOW TEST:18.182 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":99,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:46.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:47.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5329" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":8,"skipped":112,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:47.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d7d36767-39f5-464d-8fb9-ded6026de272 STEP: Creating a pod to test consume secrets Mar 12 19:04:47.156: INFO: Waiting up to 5m0s for pod "pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b" in namespace "secrets-1507" to be "success or failure" Mar 12 19:04:47.162: INFO: Pod "pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.404776ms Mar 12 19:04:49.166: INFO: Pod "pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010350233s Mar 12 19:04:51.170: INFO: Pod "pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014044116s STEP: Saw pod success Mar 12 19:04:51.170: INFO: Pod "pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b" satisfied condition "success or failure" Mar 12 19:04:51.173: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b container secret-volume-test: STEP: delete the pod Mar 12 19:04:51.206: INFO: Waiting for pod pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b to disappear Mar 12 19:04:51.210: INFO: Pod pod-secrets-26bb130b-b388-4b16-abcd-05a1e5f4dd8b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:51.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1507" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":121,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:51.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 12 19:04:55.277: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-290 PodName:pod-sharedvolume-df92bdc2-4b09-4bb2-80af-c7e776963287 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:04:55.277: INFO: >>> kubeConfig: /root/.kube/config I0312 19:04:55.310779 6 log.go:172] (0xc004f3a630) (0xc0023cafa0) Create stream I0312 19:04:55.310810 6 log.go:172] (0xc004f3a630) (0xc0023cafa0) Stream added, broadcasting: 1 I0312 19:04:55.313133 6 log.go:172] (0xc004f3a630) Reply frame received for 1 I0312 19:04:55.313167 6 log.go:172] (0xc004f3a630) (0xc0023cb040) Create stream I0312 19:04:55.313178 6 log.go:172] (0xc004f3a630) (0xc0023cb040) Stream added, broadcasting: 3 I0312 19:04:55.313922 6 log.go:172] (0xc004f3a630) Reply frame received for 3 I0312 19:04:55.313954 6 log.go:172] (0xc004f3a630) (0xc002412320) Create stream I0312 19:04:55.313967 6 log.go:172] (0xc004f3a630) (0xc002412320) Stream added, broadcasting: 5 I0312 19:04:55.314811 6 log.go:172] (0xc004f3a630) Reply frame received for 5 I0312 19:04:55.368859 6 log.go:172] (0xc004f3a630) Data frame received for 5 I0312 19:04:55.368911 6 log.go:172] (0xc004f3a630) Data frame received for 3 I0312 19:04:55.368940 6 log.go:172] (0xc0023cb040) (3) Data frame handling I0312 
19:04:55.368969 6 log.go:172] (0xc0023cb040) (3) Data frame sent I0312 19:04:55.368978 6 log.go:172] (0xc004f3a630) Data frame received for 3 I0312 19:04:55.368984 6 log.go:172] (0xc0023cb040) (3) Data frame handling I0312 19:04:55.369008 6 log.go:172] (0xc002412320) (5) Data frame handling I0312 19:04:55.370153 6 log.go:172] (0xc004f3a630) Data frame received for 1 I0312 19:04:55.370178 6 log.go:172] (0xc0023cafa0) (1) Data frame handling I0312 19:04:55.370195 6 log.go:172] (0xc0023cafa0) (1) Data frame sent I0312 19:04:55.370232 6 log.go:172] (0xc004f3a630) (0xc0023cafa0) Stream removed, broadcasting: 1 I0312 19:04:55.370476 6 log.go:172] (0xc004f3a630) (0xc0023cafa0) Stream removed, broadcasting: 1 I0312 19:04:55.370493 6 log.go:172] (0xc004f3a630) (0xc0023cb040) Stream removed, broadcasting: 3 I0312 19:04:55.370503 6 log.go:172] (0xc004f3a630) (0xc002412320) Stream removed, broadcasting: 5 Mar 12 19:04:55.370: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:04:55.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0312 19:04:55.370743 6 log.go:172] (0xc004f3a630) Go away received STEP: Destroying namespace "emptydir-290" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":10,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:04:55.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:04:55.443: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 12 19:04:57.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 create -f -' Mar 12 19:04:59.210: INFO: stderr: "" Mar 12 19:04:59.210: INFO: stdout: "e2e-test-crd-publish-openapi-405-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 12 19:04:59.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 delete e2e-test-crd-publish-openapi-405-crds test-foo' Mar 12 19:04:59.329: INFO: stderr: "" Mar 12 19:04:59.329: INFO: stdout: "e2e-test-crd-publish-openapi-405-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 12 19:04:59.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 apply -f -' Mar 12 19:04:59.760: INFO: stderr: "" Mar 12 19:04:59.760: INFO: stdout: "e2e-test-crd-publish-openapi-405-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 12 19:04:59.760: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 delete e2e-test-crd-publish-openapi-405-crds test-foo' Mar 12 19:04:59.928: INFO: stderr: "" Mar 12 19:04:59.928: INFO: stdout: "e2e-test-crd-publish-openapi-405-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 12 19:04:59.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 create -f -' Mar 12 19:05:00.171: INFO: rc: 1 Mar 12 19:05:00.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 apply -f -' Mar 12 19:05:00.383: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 12 19:05:00.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 create -f -' Mar 12 19:05:00.581: INFO: rc: 1 Mar 12 19:05:00.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3295 apply -f -' Mar 12 19:05:00.757: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 12 19:05:00.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-405-crds' Mar 12 19:05:01.019: INFO: stderr: "" Mar 12 19:05:01.019: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-405-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 12 19:05:01.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-405-crds.metadata' Mar 12 19:05:01.246: INFO: stderr: "" Mar 12 19:05:01.247: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-405-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. 
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 12 19:05:01.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-405-crds.spec' Mar 12 19:05:01.444: INFO: stderr: "" Mar 12 19:05:01.444: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-405-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 12 19:05:01.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-405-crds.spec.bars' Mar 12 19:05:01.642: INFO: stderr: "" Mar 12 19:05:01.642: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-405-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 12 19:05:01.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-405-crds.spec.bars2' Mar 12 19:05:01.852: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:05:04.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3295" for this suite. • [SLOW TEST:9.306 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":11,"skipped":169,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:05:04.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 19:05:04.717: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 19:05:04.741: INFO: Waiting for terminating namespaces to be deleted... 
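Stepping back to the CRD-with-validation-schema test whose kubectl explain output appears above: the schema it describes (spec.bars as a list of objects with a required name, a string age, and a bazs string list) corresponds to a CRD roughly along these lines. The group and names are illustrative; only the spec.bars shape is taken from the explain output:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com   # illustrative
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string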
Mar 12 19:05:04.744: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 12 19:05:04.749: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:05:04.749: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:05:04.749: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:05:04.749: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 19:05:04.749: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 12 19:05:04.753: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:05:04.753: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:05:04.753: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:05:04.753: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fba3e69b1d29e0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:05:05.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-473" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":12,"skipped":181,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:05:05.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 12 19:05:05.901: INFO: Waiting up to 5m0s for pod "pod-171c9897-57a1-4c4e-a7cf-7c2b2ac240e9" in namespace "emptydir-8305" to be "success or failure" Mar 12 19:05:05.905: INFO: Pod "pod-171c9897-57a1-4c4e-a7cf-7c2b2ac240e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.496734ms Mar 12 19:05:07.909: INFO: Pod "pod-171c9897-57a1-4c4e-a7cf-7c2b2ac240e9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007505555s STEP: Saw pod success Mar 12 19:05:07.909: INFO: Pod "pod-171c9897-57a1-4c4e-a7cf-7c2b2ac240e9" satisfied condition "success or failure" Mar 12 19:05:07.912: INFO: Trying to get logs from node jerma-worker2 pod pod-171c9897-57a1-4c4e-a7cf-7c2b2ac240e9 container test-container: STEP: delete the pod Mar 12 19:05:07.937: INFO: Waiting for pod pod-171c9897-57a1-4c4e-a7cf-7c2b2ac240e9 to disappear Mar 12 19:05:07.947: INFO: Pod pod-171c9897-57a1-4c4e-a7cf-7c2b2ac240e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:05:07.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8305" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":197,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:05:07.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:05:08.948: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:05:12.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:05:12.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9117" for this suite. STEP: Destroying namespace "webhook-9117-markers" for this suite. 
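The SchedulerPredicates test above submits a pod whose nodeSelector matches no node and asserts the FailedScheduling event ("3 node(s) didn't match node selector"). A minimal pod that provokes the same event; the label key/value and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    nonexistent-label: nonexistent-value   # no node carries this label, so the pod stays Pending
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1            # illustrative image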
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":14,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:05:12.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5020 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5020 STEP: creating replication controller externalsvc in namespace services-5020 I0312 19:05:12.260861 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5020, replica count: 2 I0312 19:05:15.311156 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 12 19:05:15.346: INFO: Creating new exec pod Mar 12 19:05:17.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5020 execpodjj5gp -- /bin/sh -x -c nslookup clusterip-service' Mar 12 19:05:17.557: INFO: stderr: "I0312 19:05:17.469626 425 log.go:172] (0xc000a88fd0) (0xc000aac0a0) Create stream\nI0312 19:05:17.469741 425 log.go:172] (0xc000a88fd0) (0xc000aac0a0) Stream added, broadcasting: 1\nI0312 19:05:17.473863 425 log.go:172] (0xc000a88fd0) Reply frame received for 1\nI0312 19:05:17.473903 425 log.go:172] (0xc000a88fd0) (0xc000a30000) Create stream\nI0312 19:05:17.473912 425 log.go:172] (0xc000a88fd0) (0xc000a30000) Stream added, broadcasting: 3\nI0312 19:05:17.474911 425 log.go:172] (0xc000a88fd0) Reply frame received for 3\nI0312 19:05:17.474961 425 log.go:172] (0xc000a88fd0) (0xc0009d8000) Create stream\nI0312 19:05:17.474973 425 log.go:172] (0xc000a88fd0) (0xc0009d8000) Stream added, broadcasting: 5\nI0312 19:05:17.475950 425 log.go:172] (0xc000a88fd0) Reply frame received for 5\nI0312 19:05:17.543366 425 log.go:172] (0xc000a88fd0) Data frame received for 5\nI0312 19:05:17.543383 425 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0312 19:05:17.543395 425 log.go:172] (0xc0009d8000) (5) Data frame sent\n+ nslookup clusterip-service\nI0312 19:05:17.549204 425 log.go:172] (0xc000a88fd0) Data frame received for 3\nI0312 19:05:17.549261 425 log.go:172] (0xc000a30000) (3) Data frame handling\nI0312 19:05:17.549291 425 log.go:172] (0xc000a30000) (3) Data 
frame sent\nI0312 19:05:17.550687 425 log.go:172] (0xc000a88fd0) Data frame received for 3\nI0312 19:05:17.550704 425 log.go:172] (0xc000a30000) (3) Data frame handling\nI0312 19:05:17.550717 425 log.go:172] (0xc000a30000) (3) Data frame sent\nI0312 19:05:17.551298 425 log.go:172] (0xc000a88fd0) Data frame received for 3\nI0312 19:05:17.551345 425 log.go:172] (0xc000a30000) (3) Data frame handling\nI0312 19:05:17.551366 425 log.go:172] (0xc000a88fd0) Data frame received for 5\nI0312 19:05:17.551375 425 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0312 19:05:17.552881 425 log.go:172] (0xc000a88fd0) Data frame received for 1\nI0312 19:05:17.552900 425 log.go:172] (0xc000aac0a0) (1) Data frame handling\nI0312 19:05:17.552912 425 log.go:172] (0xc000aac0a0) (1) Data frame sent\nI0312 19:05:17.552928 425 log.go:172] (0xc000a88fd0) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0312 19:05:17.553233 425 log.go:172] (0xc000a88fd0) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0312 19:05:17.553249 425 log.go:172] (0xc000a88fd0) (0xc000a30000) Stream removed, broadcasting: 3\nI0312 19:05:17.553259 425 log.go:172] (0xc000a88fd0) (0xc0009d8000) Stream removed, broadcasting: 5\n" Mar 12 19:05:17.557: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5020.svc.cluster.local\tcanonical name = externalsvc.services-5020.svc.cluster.local.\nName:\texternalsvc.services-5020.svc.cluster.local\nAddress: 10.106.69.23\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5020, will wait for the garbage collector to delete the pods Mar 12 19:05:17.615: INFO: Deleting ReplicationController externalsvc took: 4.5137ms Mar 12 19:05:17.715: INFO: Terminating ReplicationController externalsvc pods took: 100.234451ms Mar 12 19:05:26.166: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:05:26.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5020" for this suite. 
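The type change above can be sketched outside the framework with kubectl; the service and namespace names are taken from the log, while the exact patch shape is an assumption about how the test's service jig updates the object:

  # Repoint clusterip-service at the running externalsvc service by DNS name
  kubectl -n services-5020 patch service clusterip-service -p \
    '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-5020.svc.cluster.local","clusterIP":null}}'
  # Verify the CNAME from inside the cluster, as the exec pod does
  kubectl -n services-5020 run tmp --image=docker.io/library/busybox:1.29 \
    --restart=Never --rm -it -- nslookup clusterip-service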
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.061 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":15,"skipped":221,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:05:26.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:05:26.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:05:29.577: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:05:29.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5449" for this suite. STEP: Destroying namespace "webhook-5449-markers" for this suite. 
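The rule flip the test performs corresponds to a JSON patch against the webhook configuration; a sketch with a hypothetical configuration name (the real one is generated by the test):

  # Drop CREATE from the first rule, then restore it, as the test does
  kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
  kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'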
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":16,"skipped":222,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:05:29.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-6370daf9-ce00-4597-876a-744581641204 STEP: Creating a pod to test consume secrets Mar 12 19:05:29.872: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bcae0c83-7ca1-457f-9932-ac72bc31aff8" in namespace "projected-2978" to be "success or failure" Mar 12 19:05:29.877: INFO: Pod "pod-projected-secrets-bcae0c83-7ca1-457f-9932-ac72bc31aff8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539149ms Mar 12 19:05:31.881: INFO: Pod "pod-projected-secrets-bcae0c83-7ca1-457f-9932-ac72bc31aff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008730035s STEP: Saw pod success Mar 12 19:05:31.881: INFO: Pod "pod-projected-secrets-bcae0c83-7ca1-457f-9932-ac72bc31aff8" satisfied condition "success or failure" Mar 12 19:05:31.884: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-bcae0c83-7ca1-457f-9932-ac72bc31aff8 container projected-secret-volume-test: STEP: delete the pod Mar 12 19:05:31.903: INFO: Waiting for pod pod-projected-secrets-bcae0c83-7ca1-457f-9932-ac72bc31aff8 to disappear Mar 12 19:05:31.920: INFO: Pod pod-projected-secrets-bcae0c83-7ca1-457f-9932-ac72bc31aff8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:05:31.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2978" for this suite. 
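The "mappings" in the spec name are the items list of a projected secret source, which remaps a secret key onto a chosen path inside the mount; a minimal pod sketch (the secret name comes from the log, the key and path are hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-example
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/projected/new-path-data-1"]
      volumeMounts:
      - name: projected-secret
        mountPath: /etc/projected
    volumes:
    - name: projected-secret
      projected:
        sources:
        - secret:
            name: projected-secret-test-map-6370daf9-ce00-4597-876a-744581641204
            items:
            - key: data-1              # hypothetical key inside the secret
              path: new-path-data-1    # remapped file name inside the mount
  EOF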
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":233,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:05:31.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0312 19:06:12.049917 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 19:06:12.049: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:06:12.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8745" for this suite. 
• [SLOW TEST:40.137 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":18,"skipped":233,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:06:12.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:12.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4358" for this suite. 
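A readiness probe that can never succeed keeps the pod Running but never Ready, and must not cause restarts (restarting on failure is the liveness probe's job, not the readiness probe's); a minimal sketch of such a pod:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: never-ready-example
  spec:
    containers:
    - name: test-webserver
      image: docker.io/library/busybox:1.29
      command: ["sleep", "3600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]   # always fails, so the pod never turns Ready
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # Expect READY 0/1 and RESTARTS 0 for as long as the pod runs
  kubectl get pod never-ready-example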
• [SLOW TEST:60.071 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":251,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:12.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 19:07:12.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4313' Mar 12 19:07:12.296: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 19:07:12.296: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 12 19:07:14.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4313' Mar 12 19:07:14.464: INFO: stderr: "" Mar 12 19:07:14.464: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:14.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4313" for this suite. 
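The stderr above is the 1.17-era warning: the kubectl run generators were deprecated and later removed. The modern equivalent of the command under test is:

  kubectl create deployment e2e-test-httpd-deployment \
    --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4313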
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":20,"skipped":262,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:14.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 12 19:07:14.535: INFO: Waiting up to 5m0s for pod "pod-7e969f81-6ba9-4b76-947c-f200a34f817a" in namespace "emptydir-7010" to be "success or failure" Mar 12 19:07:14.539: INFO: Pod "pod-7e969f81-6ba9-4b76-947c-f200a34f817a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201237ms Mar 12 19:07:16.542: INFO: Pod "pod-7e969f81-6ba9-4b76-947c-f200a34f817a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006839567s STEP: Saw pod success Mar 12 19:07:16.542: INFO: Pod "pod-7e969f81-6ba9-4b76-947c-f200a34f817a" satisfied condition "success or failure" Mar 12 19:07:16.543: INFO: Trying to get logs from node jerma-worker2 pod pod-7e969f81-6ba9-4b76-947c-f200a34f817a container test-container: STEP: delete the pod Mar 12 19:07:16.635: INFO: Waiting for pod pod-7e969f81-6ba9-4b76-947c-f200a34f817a to disappear Mar 12 19:07:16.641: INFO: Pod pod-7e969f81-6ba9-4b76-947c-f200a34f817a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:16.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7010" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":282,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:16.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:07:16.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4574ee4f-4859-4c91-89ae-4aea19278e25" in namespace "downward-api-4755" to be "success or failure" Mar 12 19:07:16.704: INFO: Pod "downwardapi-volume-4574ee4f-4859-4c91-89ae-4aea19278e25": Phase="Pending", Reason="", readiness=false. Elapsed: 14.648941ms Mar 12 19:07:18.707: INFO: Pod "downwardapi-volume-4574ee4f-4859-4c91-89ae-4aea19278e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0175446s STEP: Saw pod success Mar 12 19:07:18.707: INFO: Pod "downwardapi-volume-4574ee4f-4859-4c91-89ae-4aea19278e25" satisfied condition "success or failure" Mar 12 19:07:18.709: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4574ee4f-4859-4c91-89ae-4aea19278e25 container client-container: STEP: delete the pod Mar 12 19:07:18.720: INFO: Waiting for pod downwardapi-volume-4574ee4f-4859-4c91-89ae-4aea19278e25 to disappear Mar 12 19:07:18.739: INFO: Pod downwardapi-volume-4574ee4f-4859-4c91-89ae-4aea19278e25 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:18.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4755" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":289,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:18.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-920efb7b-0aa6-4f5f-8067-32f291f34113 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:18.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4083" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":23,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:18.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 12 19:07:18.852: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:33.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6183" for this suite. 
• [SLOW TEST:15.042 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":24,"skipped":367,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:33.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-7772/secret-test-20bf7fe4-d2a6-4536-8804-1e6305eae8f1 STEP: Creating a pod to test consume secrets Mar 12 19:07:33.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-92ef972f-74af-4882-8ed4-c5c7294a1c36" in namespace "secrets-7772" to be "success or failure" Mar 12 19:07:33.935: INFO: Pod "pod-configmaps-92ef972f-74af-4882-8ed4-c5c7294a1c36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083211ms Mar 12 19:07:35.939: INFO: Pod "pod-configmaps-92ef972f-74af-4882-8ed4-c5c7294a1c36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009479925s STEP: Saw pod success Mar 12 19:07:35.939: INFO: Pod "pod-configmaps-92ef972f-74af-4882-8ed4-c5c7294a1c36" satisfied condition "success or failure" Mar 12 19:07:35.941: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-92ef972f-74af-4882-8ed4-c5c7294a1c36 container env-test: STEP: delete the pod Mar 12 19:07:35.967: INFO: Waiting for pod pod-configmaps-92ef972f-74af-4882-8ed4-c5c7294a1c36 to disappear Mar 12 19:07:36.006: INFO: Pod pod-configmaps-92ef972f-74af-4882-8ed4-c5c7294a1c36 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:36.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7772" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:36.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:47.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-327" for this suite. • [SLOW TEST:11.100 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":26,"skipped":399,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:47.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 12 19:07:47.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-814 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 12 19:07:48.560: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0312 19:07:48.509858 482 log.go:172] (0xc000a49550) (0xc000663c20) Create stream\nI0312 19:07:48.509908 482 log.go:172] (0xc000a49550) (0xc000663c20) Stream added, broadcasting: 1\nI0312 19:07:48.512452 482 log.go:172] (0xc000a49550) Reply frame received for 1\nI0312 19:07:48.512522 482 log.go:172] (0xc000a49550) (0xc000664000) Create stream\nI0312 19:07:48.512541 482 log.go:172] (0xc000a49550) (0xc000664000) Stream added, broadcasting: 3\nI0312 19:07:48.513366 482 log.go:172] (0xc000a49550) Reply frame received for 3\nI0312 19:07:48.513395 482 log.go:172] (0xc000a49550) (0xc000663cc0) Create stream\nI0312 19:07:48.513406 482 log.go:172] (0xc000a49550) (0xc000663cc0) Stream added, broadcasting: 5\nI0312 19:07:48.514266 482 log.go:172] (0xc000a49550) Reply frame received for 5\nI0312 19:07:48.514295 482 log.go:172] (0xc000a49550) (0xc00066c000) Create stream\nI0312 19:07:48.514304 482 log.go:172] (0xc000a49550) (0xc00066c000) Stream added, broadcasting: 7\nI0312 19:07:48.515127 482 log.go:172] (0xc000a49550) Reply frame received for 7\nI0312 19:07:48.515270 482 log.go:172] (0xc000664000) (3) Writing data frame\nI0312 19:07:48.515370 482 log.go:172] (0xc000664000) (3) Writing data frame\nI0312 19:07:48.516156 482 log.go:172] (0xc000a49550) Data frame received for 5\nI0312 19:07:48.516171 482 log.go:172] (0xc000663cc0) (5) Data frame handling\nI0312 19:07:48.516181 482 log.go:172] (0xc000663cc0) (5) Data frame sent\nI0312 19:07:48.516576 482 log.go:172] (0xc000a49550) Data frame received for 5\nI0312 19:07:48.516595 482 log.go:172] (0xc000663cc0) (5) Data frame handling\nI0312 19:07:48.516607 482 log.go:172] (0xc000663cc0) (5) Data frame sent\nI0312 19:07:48.535480 482 log.go:172] (0xc000a49550) Data frame received for 5\nI0312 19:07:48.535736 482 log.go:172] (0xc000663cc0) (5) Data frame handling\nI0312 19:07:48.536144 482 log.go:172] (0xc000a49550) Data frame received for 7\nI0312 19:07:48.536196 482 log.go:172] (0xc00066c000) (7) Data frame handling\nI0312 19:07:48.536430 482 log.go:172] (0xc000a49550) Data frame received for 1\nI0312 19:07:48.536449 482 log.go:172] (0xc000663c20) (1) Data frame handling\nI0312 19:07:48.536459 482 log.go:172] (0xc000663c20) (1) Data frame sent\nI0312 19:07:48.536486 482 log.go:172] (0xc000a49550) (0xc000664000) Stream removed, broadcasting: 3\nI0312 19:07:48.536512 482 log.go:172] (0xc000a49550) (0xc000663c20) Stream removed, broadcasting: 1\nI0312 19:07:48.536523 482 log.go:172] (0xc000a49550) Go away received\nI0312 19:07:48.536989 482 log.go:172] (0xc000a49550) (0xc000663c20) Stream removed, broadcasting: 1\nI0312 19:07:48.537002 482 log.go:172] (0xc000a49550) (0xc000664000) Stream removed, broadcasting: 3\nI0312 19:07:48.537007 482 log.go:172] (0xc000a49550) (0xc000663cc0) Stream removed, broadcasting: 5\nI0312 19:07:48.537014 482 log.go:172] (0xc000a49550) (0xc00066c000) Stream removed, broadcasting: 7\n" Mar 12 19:07:48.560: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:50.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-814" for this suite. 
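The deprecated --generator=job/v1 invocation above maps onto a plain pod run in current kubectl; the interesting parts are --stdin, which wires the piped input through to the container (hence the abcd1234 echoed back in stdout), and --rm, which deletes the object once it exits:

  # Modern equivalent; creates a bare pod rather than a Job object
  echo abcd1234 | kubectl --namespace=kubectl-814 run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 --restart=Never \
    --rm --attach --stdin -- sh -c 'cat && echo stdin closed'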
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":27,"skipped":406,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:50.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:07:50.635: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 4.382702ms) Mar 12 19:07:50.637: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.620561ms) Mar 12 19:07:50.640: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.384341ms) Mar 12 19:07:50.642: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.358137ms) Mar 12 19:07:50.645: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.490733ms) Mar 12 19:07:50.648: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.707955ms) Mar 12 19:07:50.652: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.461155ms) Mar 12 19:07:50.655: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.714232ms) Mar 12 19:07:50.658: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.293214ms) Mar 12 19:07:50.661: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.032438ms) Mar 12 19:07:50.689: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 27.529618ms) Mar 12 19:07:50.692: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.891658ms) Mar 12 19:07:50.695: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.318467ms) Mar 12 19:07:50.698: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.055536ms) Mar 12 19:07:50.701: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.808492ms) Mar 12 19:07:50.704: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.879683ms) Mar 12 19:07:50.706: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.57692ms) Mar 12 19:07:50.709: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.397183ms) Mar 12 19:07:50.712: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.169998ms) Mar 12 19:07:50.715: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.84958ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:50.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7996" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":28,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:50.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 12 19:07:53.348: INFO: Successfully updated pod "adopt-release-9zcsl" STEP: Checking that the Job readopts the Pod Mar 12 19:07:53.348: INFO: Waiting up to 15m0s for pod "adopt-release-9zcsl" in namespace "job-7762" to be "adopted" Mar 12 19:07:53.350: INFO: Pod "adopt-release-9zcsl": Phase="Running", Reason="", readiness=true. Elapsed: 2.672786ms Mar 12 19:07:55.354: INFO: Pod "adopt-release-9zcsl": Phase="Running", Reason="", readiness=true. Elapsed: 2.005951305s Mar 12 19:07:55.354: INFO: Pod "adopt-release-9zcsl" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 12 19:07:55.861: INFO: Successfully updated pod "adopt-release-9zcsl" STEP: Checking that the Job releases the Pod Mar 12 19:07:55.861: INFO: Waiting up to 15m0s for pod "adopt-release-9zcsl" in namespace "job-7762" to be "released" Mar 12 19:07:55.863: INFO: Pod "adopt-release-9zcsl": Phase="Running", Reason="", readiness=true. Elapsed: 2.269535ms Mar 12 19:07:57.868: INFO: Pod "adopt-release-9zcsl": Phase="Running", Reason="", readiness=true. Elapsed: 2.007413625s Mar 12 19:07:57.868: INFO: Pod "adopt-release-9zcsl" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:57.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7762" for this suite. 
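The orphaning step above is a patch that strips the pod's controller ownerReference; the Job controller re-adopts it because the labels still match, and removing the matching label afterwards makes the controller release it again. A kubectl sketch (pod name from the log; the label key the Job selects on is an assumption):

  # Orphan the pod; the Job controller should re-adopt it within seconds
  kubectl -n job-7762 patch pod adopt-release-9zcsl --type=json \
    -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'
  # Remove the matching label instead; the controller now releases the pod
  kubectl -n job-7762 patch pod adopt-release-9zcsl --type=json \
    -p='[{"op":"remove","path":"/metadata/labels/job"}]'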
• [SLOW TEST:7.158 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":29,"skipped":433,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:07:57.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 19:07:59.977: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:07:59.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3733" for this suite. 
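TerminationMessagePolicy FallbackToLogsOnError tells the kubelet to copy the tail of the container log into the termination message when the container fails without writing /dev/termination-log, which is why DONE shows up in the status above. A minimal reproduction:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-example
  spec:
    restartPolicy: Never
    containers:
    - name: term-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo DONE; exit 1"]   # fail without touching /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  kubectl get pod termination-message-example \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'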
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:08:00.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-g56np in namespace proxy-5843 I0312 19:08:00.149999 6 runners.go:189] Created replication controller with name: proxy-service-g56np, namespace: proxy-5843, replica count: 1 I0312 19:08:01.200461 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 19:08:02.200722 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 19:08:03.200873 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 19:08:04.201088 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 19:08:05.201304 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 19:08:06.201508 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 19:08:07.201782 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 19:08:08.202030 6 runners.go:189] proxy-service-g56np Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 19:08:08.205: INFO: setup took 8.12549203s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 12 19:08:08.213: INFO: (0) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 8.494598ms) Mar 12 19:08:08.213: INFO: (0) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... 
(200; 8.783033ms) Mar 12 19:08:08.213: INFO: (0) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 8.737223ms) Mar 12 19:08:08.214: INFO: (0) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 8.995043ms) Mar 12 19:08:08.215: INFO: (0) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 10.598483ms) Mar 12 19:08:08.215: INFO: (0) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 10.648367ms) Mar 12 19:08:08.215: INFO: (0) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 10.8753ms) Mar 12 19:08:08.217: INFO: (0) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 12.624622ms) Mar 12 19:08:08.220: INFO: (0) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 15.45227ms) Mar 12 19:08:08.220: INFO: (0) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 15.70643ms) Mar 12 19:08:08.221: INFO: (0) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 15.757828ms) Mar 12 19:08:08.221: INFO: (0) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 16.172923ms) Mar 12 19:08:08.221: INFO: (0) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: ... (200; 5.808343ms) Mar 12 19:08:08.231: INFO: (1) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 6.607153ms) Mar 12 19:08:08.231: INFO: (1) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 6.734249ms) Mar 12 19:08:08.231: INFO: (1) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 6.637278ms) Mar 12 19:08:08.231: INFO: (1) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 6.296762ms) Mar 12 19:08:08.231: INFO: (1) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 6.585014ms) Mar 12 19:08:08.231: INFO: (1) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 6.7631ms) Mar 12 19:08:08.232: INFO: (1) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 7.223906ms) Mar 12 19:08:08.232: INFO: (1) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 8.036082ms) Mar 12 19:08:08.232: INFO: (1) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 8.555673ms) Mar 12 19:08:08.232: INFO: (1) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 7.606442ms) Mar 12 19:08:08.232: INFO: (1) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 8.171843ms) Mar 12 19:08:08.232: INFO: (1) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test<... (200; 4.864789ms) Mar 12 19:08:08.237: INFO: (2) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 5.30799ms) Mar 12 19:08:08.237: INFO: (2) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... 
(200; 5.356192ms) Mar 12 19:08:08.237: INFO: (2) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 5.38589ms) Mar 12 19:08:08.237: INFO: (2) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 5.409693ms) Mar 12 19:08:08.237: INFO: (2) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 5.452654ms) Mar 12 19:08:08.237: INFO: (2) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 5.445968ms) Mar 12 19:08:08.237: INFO: (2) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test<... (200; 4.533963ms) Mar 12 19:08:08.244: INFO: (3) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 4.807542ms) Mar 12 19:08:08.244: INFO: (3) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 5.220824ms) Mar 12 19:08:08.244: INFO: (3) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 5.49316ms) Mar 12 19:08:08.245: INFO: (3) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 5.609818ms) Mar 12 19:08:08.245: INFO: (3) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 5.599727ms) Mar 12 19:08:08.245: INFO: (3) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 5.629491ms) Mar 12 19:08:08.245: INFO: (3) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 5.641144ms) Mar 12 19:08:08.245: INFO: (3) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 5.678448ms) Mar 12 19:08:08.245: INFO: (3) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 5.723937ms) Mar 12 19:08:08.245: INFO: (3) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 6.017182ms) Mar 12 19:08:08.247: INFO: (4) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 2.290877ms) Mar 12 19:08:08.248: INFO: (4) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 4.31327ms) Mar 12 19:08:08.249: INFO: (4) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 4.428398ms) Mar 12 19:08:08.249: INFO: (4) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 4.468023ms) Mar 12 19:08:08.249: INFO: (4) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 4.447542ms) Mar 12 19:08:08.249: INFO: (4) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 4.465647ms) Mar 12 19:08:08.250: INFO: (4) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... 
(200; 4.555858ms) Mar 12 19:08:08.250: INFO: (4) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 4.549399ms) Mar 12 19:08:08.251: INFO: (4) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 5.617939ms) Mar 12 19:08:08.251: INFO: (4) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 5.848985ms) Mar 12 19:08:08.251: INFO: (4) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 5.880578ms) Mar 12 19:08:08.251: INFO: (4) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 5.866534ms) Mar 12 19:08:08.251: INFO: (4) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 5.997072ms) Mar 12 19:08:08.251: INFO: (4) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 6.047121ms) Mar 12 19:08:08.254: INFO: (5) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 2.967605ms) Mar 12 19:08:08.254: INFO: (5) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.109376ms) Mar 12 19:08:08.256: INFO: (5) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 5.15719ms) Mar 12 19:08:08.257: INFO: (5) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 5.710988ms) Mar 12 19:08:08.257: INFO: (5) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 6.15971ms) Mar 12 19:08:08.257: INFO: (5) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 6.086747ms) Mar 12 19:08:08.257: INFO: (5) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 6.234803ms) Mar 12 19:08:08.258: INFO: (5) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 6.301815ms) Mar 12 19:08:08.259: INFO: (5) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 7.521361ms) Mar 12 19:08:08.259: INFO: (5) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: ... (200; 7.555571ms) Mar 12 19:08:08.259: INFO: (5) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 7.738817ms) Mar 12 19:08:08.259: INFO: (5) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 7.836605ms) Mar 12 19:08:08.259: INFO: (5) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 7.917503ms) Mar 12 19:08:08.259: INFO: (5) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 8.093329ms) Mar 12 19:08:08.259: INFO: (5) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 8.292376ms) Mar 12 19:08:08.263: INFO: (6) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test<... (200; 4.651559ms) Mar 12 19:08:08.265: INFO: (6) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 5.181153ms) Mar 12 19:08:08.265: INFO: (6) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 5.385251ms) Mar 12 19:08:08.265: INFO: (6) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... 
(200; 5.376109ms) Mar 12 19:08:08.265: INFO: (6) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 5.50411ms) Mar 12 19:08:08.265: INFO: (6) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 5.567619ms) Mar 12 19:08:08.265: INFO: (6) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 5.598005ms) Mar 12 19:08:08.266: INFO: (6) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 6.115692ms) Mar 12 19:08:08.266: INFO: (6) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 6.621948ms) Mar 12 19:08:08.266: INFO: (6) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 6.654012ms) Mar 12 19:08:08.266: INFO: (6) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 6.681758ms) Mar 12 19:08:08.266: INFO: (6) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 6.629997ms) Mar 12 19:08:08.269: INFO: (7) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 2.566482ms) Mar 12 19:08:08.271: INFO: (7) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 4.457253ms) Mar 12 19:08:08.271: INFO: (7) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 5.039129ms) Mar 12 19:08:08.271: INFO: (7) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 4.976785ms) Mar 12 19:08:08.271: INFO: (7) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 4.971947ms) Mar 12 19:08:08.272: INFO: (7) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 5.193349ms) Mar 12 19:08:08.272: INFO: (7) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: ... (200; 5.174922ms) Mar 12 19:08:08.272: INFO: (7) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 5.277084ms) Mar 12 19:08:08.273: INFO: (7) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 6.754708ms) Mar 12 19:08:08.274: INFO: (7) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 7.088686ms) Mar 12 19:08:08.274: INFO: (7) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 7.192625ms) Mar 12 19:08:08.274: INFO: (7) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 7.162013ms) Mar 12 19:08:08.274: INFO: (7) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 7.302276ms) Mar 12 19:08:08.274: INFO: (7) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 7.301027ms) Mar 12 19:08:08.277: INFO: (8) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 2.920746ms) Mar 12 19:08:08.277: INFO: (8) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... 
(200; 2.969221ms) Mar 12 19:08:08.278: INFO: (8) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 3.801597ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 5.015427ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 4.977884ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 5.027903ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 4.976699ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 5.026427ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 5.095092ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 5.207405ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 5.233153ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 5.235209ms) Mar 12 19:08:08.279: INFO: (8) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 5.290809ms) Mar 12 19:08:08.280: INFO: (8) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 5.832485ms) Mar 12 19:08:08.280: INFO: (8) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 5.905424ms) Mar 12 19:08:08.280: INFO: (8) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 4.102841ms) Mar 12 19:08:08.284: INFO: (9) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 4.050542ms) Mar 12 19:08:08.284: INFO: (9) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 4.318969ms) Mar 12 19:08:08.284: INFO: (9) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 4.445704ms) Mar 12 19:08:08.284: INFO: (9) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 4.395143ms) Mar 12 19:08:08.285: INFO: (9) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test<... (200; 5.902483ms) Mar 12 19:08:08.286: INFO: (9) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 6.107773ms) Mar 12 19:08:08.286: INFO: (9) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 6.113822ms) Mar 12 19:08:08.286: INFO: (9) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 6.127973ms) Mar 12 19:08:08.286: INFO: (9) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 6.41848ms) Mar 12 19:08:08.287: INFO: (9) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 6.780333ms) Mar 12 19:08:08.289: INFO: (10) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 2.525642ms) Mar 12 19:08:08.290: INFO: (10) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... 
(200; 3.240546ms) Mar 12 19:08:08.290: INFO: (10) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.27548ms) Mar 12 19:08:08.290: INFO: (10) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.201204ms) Mar 12 19:08:08.290: INFO: (10) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 3.235199ms) Mar 12 19:08:08.290: INFO: (10) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 3.459559ms) Mar 12 19:08:08.291: INFO: (10) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 4.249208ms) Mar 12 19:08:08.291: INFO: (10) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 4.300482ms) Mar 12 19:08:08.291: INFO: (10) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 4.389989ms) Mar 12 19:08:08.291: INFO: (10) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 4.486853ms) Mar 12 19:08:08.292: INFO: (10) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 4.744656ms) Mar 12 19:08:08.292: INFO: (10) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: ... (200; 4.777914ms) Mar 12 19:08:08.292: INFO: (10) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 4.80082ms) Mar 12 19:08:08.295: INFO: (11) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 3.537153ms) Mar 12 19:08:08.295: INFO: (11) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 2.950124ms) Mar 12 19:08:08.295: INFO: (11) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 2.947963ms) Mar 12 19:08:08.295: INFO: (11) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.417462ms) Mar 12 19:08:08.295: INFO: (11) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 3.248654ms) Mar 12 19:08:08.295: INFO: (11) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: ... (200; 3.80202ms) Mar 12 19:08:08.296: INFO: (11) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.825609ms) Mar 12 19:08:08.296: INFO: (11) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 4.210929ms) Mar 12 19:08:08.296: INFO: (11) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 3.896709ms) Mar 12 19:08:08.297: INFO: (11) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 5.17911ms) Mar 12 19:08:08.297: INFO: (11) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 5.319635ms) Mar 12 19:08:08.297: INFO: (11) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 4.887875ms) Mar 12 19:08:08.297: INFO: (11) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 4.924661ms) Mar 12 19:08:08.297: INFO: (11) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 5.168481ms) Mar 12 19:08:08.300: INFO: (12) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... 
(200; 3.002888ms) Mar 12 19:08:08.300: INFO: (12) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.024369ms) Mar 12 19:08:08.301: INFO: (12) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.33663ms) Mar 12 19:08:08.301: INFO: (12) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.14604ms) Mar 12 19:08:08.301: INFO: (12) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 3.376174ms) Mar 12 19:08:08.301: INFO: (12) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 3.761953ms) Mar 12 19:08:08.301: INFO: (12) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.809022ms) Mar 12 19:08:08.301: INFO: (12) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 3.725359ms) Mar 12 19:08:08.301: INFO: (12) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 3.586301ms) Mar 12 19:08:08.302: INFO: (12) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 4.16389ms) Mar 12 19:08:08.302: INFO: (12) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 4.533736ms) Mar 12 19:08:08.302: INFO: (12) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 4.875527ms) Mar 12 19:08:08.302: INFO: (12) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 4.887501ms) Mar 12 19:08:08.302: INFO: (12) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 4.918437ms) Mar 12 19:08:08.302: INFO: (12) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 4.861352ms) Mar 12 19:08:08.305: INFO: (13) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 2.222501ms) Mar 12 19:08:08.305: INFO: (13) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 2.33444ms) Mar 12 19:08:08.305: INFO: (13) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 2.45143ms) Mar 12 19:08:08.305: INFO: (13) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 2.564881ms) Mar 12 19:08:08.306: INFO: (13) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 3.041835ms) Mar 12 19:08:08.306: INFO: (13) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.136793ms) Mar 12 19:08:08.306: INFO: (13) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 3.584841ms) Mar 12 19:08:08.306: INFO: (13) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 3.806032ms) Mar 12 19:08:08.307: INFO: (13) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... 
(200; 4.128906ms) Mar 12 19:08:08.307: INFO: (13) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 4.155556ms) Mar 12 19:08:08.307: INFO: (13) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 4.138347ms) Mar 12 19:08:08.307: INFO: (13) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 4.198514ms) Mar 12 19:08:08.307: INFO: (13) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 4.212132ms) Mar 12 19:08:08.307: INFO: (13) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 4.270256ms) Mar 12 19:08:08.310: INFO: (14) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 3.15708ms) Mar 12 19:08:08.310: INFO: (14) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.315772ms) Mar 12 19:08:08.313: INFO: (14) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 6.227029ms) Mar 12 19:08:08.313: INFO: (14) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 7.154026ms) Mar 12 19:08:08.314: INFO: (14) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 7.225067ms) Mar 12 19:08:08.314: INFO: (14) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 7.220525ms) Mar 12 19:08:08.314: INFO: (14) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 7.157913ms) Mar 12 19:08:08.314: INFO: (14) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 7.184274ms) Mar 12 19:08:08.317: INFO: (15) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 2.98768ms) Mar 12 19:08:08.317: INFO: (15) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 3.18382ms) Mar 12 19:08:08.317: INFO: (15) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.256874ms) Mar 12 19:08:08.318: INFO: (15) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 3.989446ms) Mar 12 19:08:08.318: INFO: (15) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 4.132494ms) Mar 12 19:08:08.318: INFO: (15) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 4.187282ms) Mar 12 19:08:08.318: INFO: (15) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 4.192827ms) Mar 12 19:08:08.322: INFO: (16) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.592416ms) Mar 12 19:08:08.322: INFO: (16) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.842548ms) Mar 12 19:08:08.322: INFO: (16) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.951349ms) Mar 12 19:08:08.322: INFO: (16) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test<... 
(200; 4.826702ms) Mar 12 19:08:08.324: INFO: (16) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 5.465374ms) Mar 12 19:08:08.324: INFO: (16) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 5.627736ms) Mar 12 19:08:08.324: INFO: (16) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 5.683781ms) Mar 12 19:08:08.324: INFO: (16) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 5.862476ms) Mar 12 19:08:08.324: INFO: (16) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 5.850557ms) Mar 12 19:08:08.324: INFO: (16) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 5.957624ms) Mar 12 19:08:08.324: INFO: (16) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 6.026954ms) Mar 12 19:08:08.325: INFO: (16) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 6.622879ms) Mar 12 19:08:08.325: INFO: (16) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 6.824375ms) Mar 12 19:08:08.325: INFO: (16) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 6.805715ms) Mar 12 19:08:08.329: INFO: (17) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 3.148054ms) Mar 12 19:08:08.329: INFO: (17) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 3.502983ms) Mar 12 19:08:08.329: INFO: (17) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 3.584895ms) Mar 12 19:08:08.329: INFO: (17) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.51623ms) Mar 12 19:08:08.329: INFO: (17) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.558614ms) Mar 12 19:08:08.330: INFO: (17) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 4.196982ms) Mar 12 19:08:08.330: INFO: (17) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 4.338051ms) Mar 12 19:08:08.330: INFO: (17) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 4.319527ms) Mar 12 19:08:08.330: INFO: (17) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 4.346117ms) Mar 12 19:08:08.330: INFO: (17) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 4.420733ms) Mar 12 19:08:08.330: INFO: (17) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test<... 
(200; 2.927962ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.232911ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 3.196829ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 3.637235ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.560073ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.695556ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... (200; 3.991685ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname1/proxy/: tls baz (200; 4.076486ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 4.212276ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 4.112031ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 4.156599ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql/proxy/: test (200; 4.108841ms) Mar 12 19:08:08.334: INFO: (18) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:443/proxy/: test (200; 3.520854ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:1080/proxy/: test<... (200; 3.891385ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.519145ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:1080/proxy/: ... 
(200; 3.486634ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:462/proxy/: tls qux (200; 3.563587ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:162/proxy/: bar (200; 3.759838ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/http:proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.49118ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/proxy-service-g56np-6tjql:160/proxy/: foo (200; 3.91575ms) Mar 12 19:08:08.339: INFO: (19) /api/v1/namespaces/proxy-5843/pods/https:proxy-service-g56np-6tjql:460/proxy/: tls baz (200; 3.876961ms) Mar 12 19:08:08.340: INFO: (19) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname2/proxy/: bar (200; 4.579695ms) Mar 12 19:08:08.340: INFO: (19) /api/v1/namespaces/proxy-5843/services/https:proxy-service-g56np:tlsportname2/proxy/: tls qux (200; 5.068745ms) Mar 12 19:08:08.340: INFO: (19) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname2/proxy/: bar (200; 5.122696ms) Mar 12 19:08:08.340: INFO: (19) /api/v1/namespaces/proxy-5843/services/proxy-service-g56np:portname1/proxy/: foo (200; 4.936636ms) Mar 12 19:08:08.340: INFO: (19) /api/v1/namespaces/proxy-5843/services/http:proxy-service-g56np:portname1/proxy/: foo (200; 4.925685ms) STEP: deleting ReplicationController proxy-service-g56np in namespace proxy-5843, will wait for the garbage collector to delete the pods Mar 12 19:08:08.395: INFO: Deleting ReplicationController proxy-service-g56np took: 3.285413ms Mar 12 19:08:08.495: INFO: Terminating ReplicationController proxy-service-g56np pods took: 100.173005ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:08:10.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5843" for this suite. 
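The twenty numbered passes above repeatedly fetch pod and service endpoints through the apiserver's proxy subresource, covering plain pod ports, named service ports, and TLS backends (the https: scheme prefix). A minimal sketch of the same request shapes using kubectl's raw API access follows; the namespace, pod, service, and port names are illustrative placeholders, not the generated ones from this run:

    # Pod proxy: dial a numeric port on the pod via the apiserver
    kubectl get --raw "/api/v1/namespaces/<ns>/pods/<pod>:160/proxy/"
    # Service proxy: dial a named service port via the apiserver
    kubectl get --raw "/api/v1/namespaces/<ns>/services/<svc>:portname1/proxy/"
    # TLS backend: the https: prefix tells the apiserver to connect to the backend over TLS
    kubectl get --raw "/api/v1/namespaces/<ns>/pods/https:<pod>:443/proxy/"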
• [SLOW TEST:10.620 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":31,"skipped":485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:08:10.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 19:08:10.698: INFO: Waiting up to 5m0s for pod "downward-api-bf267e7e-e313-4654-9b10-6a3d5d23c06d" in namespace "downward-api-3390" to be "success or failure" Mar 12 19:08:10.702: INFO: Pod "downward-api-bf267e7e-e313-4654-9b10-6a3d5d23c06d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544917ms Mar 12 19:08:12.709: INFO: Pod "downward-api-bf267e7e-e313-4654-9b10-6a3d5d23c06d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010854165s STEP: Saw pod success Mar 12 19:08:12.709: INFO: Pod "downward-api-bf267e7e-e313-4654-9b10-6a3d5d23c06d" satisfied condition "success or failure" Mar 12 19:08:12.711: INFO: Trying to get logs from node jerma-worker pod downward-api-bf267e7e-e313-4654-9b10-6a3d5d23c06d container dapi-container: STEP: delete the pod Mar 12 19:08:12.777: INFO: Waiting for pod downward-api-bf267e7e-e313-4654-9b10-6a3d5d23c06d to disappear Mar 12 19:08:12.793: INFO: Pod downward-api-bf267e7e-e313-4654-9b10-6a3d5d23c06d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:08:12.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3390" for this suite. 
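The Downward API test above injects the pod's own UID into the container environment through a fieldRef. A minimal sketch of an equivalent pod, assuming a generic busybox image; the pod and variable names are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-uid-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo POD_UID=$POD_UID"]
        env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
    EOF
    # After the pod completes, the log should print POD_UID=<the pod's metadata.uid>
    kubectl logs downward-uid-demo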
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":508,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:08:12.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9635 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9635 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9635 Mar 12 19:08:12.897: INFO: Found 0 stateful pods, waiting for 1 Mar 12 19:08:22.901: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 12 19:08:22.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 19:08:23.135: INFO: stderr: "I0312 19:08:23.037825 506 log.go:172] (0xc000a41d90) (0xc0009c2960) Create stream\nI0312 19:08:23.037883 506 log.go:172] (0xc000a41d90) (0xc0009c2960) Stream added, broadcasting: 1\nI0312 19:08:23.041844 506 log.go:172] (0xc000a41d90) Reply frame received for 1\nI0312 19:08:23.041896 506 log.go:172] (0xc000a41d90) (0xc0005cc5a0) Create stream\nI0312 19:08:23.041917 506 log.go:172] (0xc000a41d90) (0xc0005cc5a0) Stream added, broadcasting: 3\nI0312 19:08:23.042921 506 log.go:172] (0xc000a41d90) Reply frame received for 3\nI0312 19:08:23.042964 506 log.go:172] (0xc000a41d90) (0xc000783360) Create stream\nI0312 19:08:23.042975 506 log.go:172] (0xc000a41d90) (0xc000783360) Stream added, broadcasting: 5\nI0312 19:08:23.045336 506 log.go:172] (0xc000a41d90) Reply frame received for 5\nI0312 19:08:23.109123 506 log.go:172] (0xc000a41d90) Data frame received for 5\nI0312 19:08:23.109142 506 log.go:172] (0xc000783360) (5) Data frame handling\nI0312 19:08:23.109154 506 log.go:172] (0xc000783360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 19:08:23.128726 506 log.go:172] (0xc000a41d90) Data frame received for 5\nI0312 19:08:23.128744 506 log.go:172] (0xc000783360) (5) Data frame handling\nI0312 19:08:23.128761 506 log.go:172] (0xc000a41d90) Data frame received for 3\nI0312 19:08:23.128776 506 log.go:172] (0xc0005cc5a0) (3) Data frame handling\nI0312 19:08:23.128788 506 log.go:172] 
(0xc0005cc5a0) (3) Data frame sent\nI0312 19:08:23.128795 506 log.go:172] (0xc000a41d90) Data frame received for 3\nI0312 19:08:23.128801 506 log.go:172] (0xc0005cc5a0) (3) Data frame handling\nI0312 19:08:23.130449 506 log.go:172] (0xc000a41d90) Data frame received for 1\nI0312 19:08:23.130466 506 log.go:172] (0xc0009c2960) (1) Data frame handling\nI0312 19:08:23.130478 506 log.go:172] (0xc0009c2960) (1) Data frame sent\nI0312 19:08:23.130840 506 log.go:172] (0xc000a41d90) (0xc0009c2960) Stream removed, broadcasting: 1\nI0312 19:08:23.131068 506 log.go:172] (0xc000a41d90) (0xc0009c2960) Stream removed, broadcasting: 1\nI0312 19:08:23.131082 506 log.go:172] (0xc000a41d90) (0xc0005cc5a0) Stream removed, broadcasting: 3\nI0312 19:08:23.131088 506 log.go:172] (0xc000a41d90) (0xc000783360) Stream removed, broadcasting: 5\n" Mar 12 19:08:23.135: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 19:08:23.135: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 19:08:23.161: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 12 19:08:33.165: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 19:08:33.165: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 19:08:33.196: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999587s Mar 12 19:08:34.198: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.977394743s Mar 12 19:08:35.201: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.974735599s Mar 12 19:08:36.233: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.972058755s Mar 12 19:08:37.245: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.939912243s Mar 12 19:08:38.248: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.927664456s Mar 12 19:08:39.253: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.924857097s Mar 12 19:08:40.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.920419677s Mar 12 19:08:41.261: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.916418724s Mar 12 19:08:42.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 912.297067ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9635 Mar 12 19:08:43.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 19:08:43.441: INFO: stderr: "I0312 19:08:43.373981 526 log.go:172] (0xc00055edc0) (0xc0008ec000) Create stream\nI0312 19:08:43.374037 526 log.go:172] (0xc00055edc0) (0xc0008ec000) Stream added, broadcasting: 1\nI0312 19:08:43.376455 526 log.go:172] (0xc00055edc0) Reply frame received for 1\nI0312 19:08:43.376479 526 log.go:172] (0xc00055edc0) (0xc0006c1c20) Create stream\nI0312 19:08:43.376486 526 log.go:172] (0xc00055edc0) (0xc0006c1c20) Stream added, broadcasting: 3\nI0312 19:08:43.377098 526 log.go:172] (0xc00055edc0) Reply frame received for 3\nI0312 19:08:43.377119 526 log.go:172] (0xc00055edc0) (0xc0008ec0a0) Create stream\nI0312 19:08:43.377126 526 log.go:172] (0xc00055edc0) (0xc0008ec0a0) Stream added, broadcasting: 5\nI0312 19:08:43.377616 526 log.go:172] (0xc00055edc0) Reply frame received for 5\nI0312 19:08:43.436437 
526 log.go:172] (0xc00055edc0) Data frame received for 3\nI0312 19:08:43.436463 526 log.go:172] (0xc0006c1c20) (3) Data frame handling\nI0312 19:08:43.436479 526 log.go:172] (0xc0006c1c20) (3) Data frame sent\nI0312 19:08:43.436489 526 log.go:172] (0xc00055edc0) Data frame received for 3\nI0312 19:08:43.436502 526 log.go:172] (0xc00055edc0) Data frame received for 5\nI0312 19:08:43.436514 526 log.go:172] (0xc0008ec0a0) (5) Data frame handling\nI0312 19:08:43.436522 526 log.go:172] (0xc0008ec0a0) (5) Data frame sent\nI0312 19:08:43.436527 526 log.go:172] (0xc00055edc0) Data frame received for 5\nI0312 19:08:43.436539 526 log.go:172] (0xc0008ec0a0) (5) Data frame handling\nI0312 19:08:43.436553 526 log.go:172] (0xc0006c1c20) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 19:08:43.437679 526 log.go:172] (0xc00055edc0) Data frame received for 1\nI0312 19:08:43.437694 526 log.go:172] (0xc0008ec000) (1) Data frame handling\nI0312 19:08:43.437703 526 log.go:172] (0xc0008ec000) (1) Data frame sent\nI0312 19:08:43.437713 526 log.go:172] (0xc00055edc0) (0xc0008ec000) Stream removed, broadcasting: 1\nI0312 19:08:43.437724 526 log.go:172] (0xc00055edc0) Go away received\nI0312 19:08:43.438003 526 log.go:172] (0xc00055edc0) (0xc0008ec000) Stream removed, broadcasting: 1\nI0312 19:08:43.438018 526 log.go:172] (0xc00055edc0) (0xc0006c1c20) Stream removed, broadcasting: 3\nI0312 19:08:43.438024 526 log.go:172] (0xc00055edc0) (0xc0008ec0a0) Stream removed, broadcasting: 5\n" Mar 12 19:08:43.441: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 19:08:43.441: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 19:08:43.455: INFO: Found 1 stateful pods, waiting for 3 Mar 12 19:08:53.459: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:08:53.459: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:08:53.459: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 12 19:08:53.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 19:08:53.652: INFO: stderr: "I0312 19:08:53.578736 546 log.go:172] (0xc000a30160) (0xc00097e000) Create stream\nI0312 19:08:53.578783 546 log.go:172] (0xc000a30160) (0xc00097e000) Stream added, broadcasting: 1\nI0312 19:08:53.581806 546 log.go:172] (0xc000a30160) Reply frame received for 1\nI0312 19:08:53.581845 546 log.go:172] (0xc000a30160) (0xc00095c0a0) Create stream\nI0312 19:08:53.581885 546 log.go:172] (0xc000a30160) (0xc00095c0a0) Stream added, broadcasting: 3\nI0312 19:08:53.582734 546 log.go:172] (0xc000a30160) Reply frame received for 3\nI0312 19:08:53.582771 546 log.go:172] (0xc000a30160) (0xc000a68000) Create stream\nI0312 19:08:53.582784 546 log.go:172] (0xc000a30160) (0xc000a68000) Stream added, broadcasting: 5\nI0312 19:08:53.583497 546 log.go:172] (0xc000a30160) Reply frame received for 5\nI0312 19:08:53.647452 546 log.go:172] (0xc000a30160) Data frame received for 5\nI0312 19:08:53.647478 546 log.go:172] (0xc000a68000) (5) Data frame handling\nI0312 19:08:53.647492 546 log.go:172] (0xc000a68000) (5) Data frame 
sent\nI0312 19:08:53.647501 546 log.go:172] (0xc000a30160) Data frame received for 5\nI0312 19:08:53.647506 546 log.go:172] (0xc000a68000) (5) Data frame handling\nI0312 19:08:53.647515 546 log.go:172] (0xc000a30160) Data frame received for 3\nI0312 19:08:53.647522 546 log.go:172] (0xc00095c0a0) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 19:08:53.647536 546 log.go:172] (0xc00095c0a0) (3) Data frame sent\nI0312 19:08:53.647544 546 log.go:172] (0xc000a30160) Data frame received for 3\nI0312 19:08:53.647550 546 log.go:172] (0xc00095c0a0) (3) Data frame handling\nI0312 19:08:53.648469 546 log.go:172] (0xc000a30160) Data frame received for 1\nI0312 19:08:53.648484 546 log.go:172] (0xc00097e000) (1) Data frame handling\nI0312 19:08:53.648493 546 log.go:172] (0xc00097e000) (1) Data frame sent\nI0312 19:08:53.648503 546 log.go:172] (0xc000a30160) (0xc00097e000) Stream removed, broadcasting: 1\nI0312 19:08:53.648514 546 log.go:172] (0xc000a30160) Go away received\nI0312 19:08:53.648782 546 log.go:172] (0xc000a30160) (0xc00097e000) Stream removed, broadcasting: 1\nI0312 19:08:53.648792 546 log.go:172] (0xc000a30160) (0xc00095c0a0) Stream removed, broadcasting: 3\nI0312 19:08:53.648798 546 log.go:172] (0xc000a30160) (0xc000a68000) Stream removed, broadcasting: 5\n" Mar 12 19:08:53.652: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 19:08:53.652: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 19:08:53.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 19:08:53.838: INFO: stderr: "I0312 19:08:53.741441 566 log.go:172] (0xc000028000) (0xc000696820) Create stream\nI0312 19:08:53.741476 566 log.go:172] (0xc000028000) (0xc000696820) Stream added, broadcasting: 1\nI0312 19:08:53.742883 566 log.go:172] (0xc000028000) Reply frame received for 1\nI0312 19:08:53.742904 566 log.go:172] (0xc000028000) (0xc0004375e0) Create stream\nI0312 19:08:53.742911 566 log.go:172] (0xc000028000) (0xc0004375e0) Stream added, broadcasting: 3\nI0312 19:08:53.743393 566 log.go:172] (0xc000028000) Reply frame received for 3\nI0312 19:08:53.743412 566 log.go:172] (0xc000028000) (0xc000437680) Create stream\nI0312 19:08:53.743421 566 log.go:172] (0xc000028000) (0xc000437680) Stream added, broadcasting: 5\nI0312 19:08:53.743895 566 log.go:172] (0xc000028000) Reply frame received for 5\nI0312 19:08:53.803703 566 log.go:172] (0xc000028000) Data frame received for 5\nI0312 19:08:53.803733 566 log.go:172] (0xc000437680) (5) Data frame handling\nI0312 19:08:53.803751 566 log.go:172] (0xc000437680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 19:08:53.832680 566 log.go:172] (0xc000028000) Data frame received for 3\nI0312 19:08:53.832702 566 log.go:172] (0xc0004375e0) (3) Data frame handling\nI0312 19:08:53.832709 566 log.go:172] (0xc0004375e0) (3) Data frame sent\nI0312 19:08:53.832803 566 log.go:172] (0xc000028000) Data frame received for 3\nI0312 19:08:53.832832 566 log.go:172] (0xc000028000) Data frame received for 5\nI0312 19:08:53.832854 566 log.go:172] (0xc000437680) (5) Data frame handling\nI0312 19:08:53.832877 566 log.go:172] (0xc0004375e0) (3) Data frame handling\nI0312 19:08:53.834302 566 log.go:172] (0xc000028000) Data frame received for 1\nI0312 19:08:53.834316 566 
log.go:172] (0xc000696820) (1) Data frame handling\nI0312 19:08:53.834323 566 log.go:172] (0xc000696820) (1) Data frame sent\nI0312 19:08:53.834332 566 log.go:172] (0xc000028000) (0xc000696820) Stream removed, broadcasting: 1\nI0312 19:08:53.834347 566 log.go:172] (0xc000028000) Go away received\nI0312 19:08:53.834657 566 log.go:172] (0xc000028000) (0xc000696820) Stream removed, broadcasting: 1\nI0312 19:08:53.834672 566 log.go:172] (0xc000028000) (0xc0004375e0) Stream removed, broadcasting: 3\nI0312 19:08:53.834678 566 log.go:172] (0xc000028000) (0xc000437680) Stream removed, broadcasting: 5\n" Mar 12 19:08:53.838: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 19:08:53.838: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 19:08:53.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 19:08:54.033: INFO: stderr: "I0312 19:08:53.948811 586 log.go:172] (0xc0000f5290) (0xc0006bde00) Create stream\nI0312 19:08:53.948844 586 log.go:172] (0xc0000f5290) (0xc0006bde00) Stream added, broadcasting: 1\nI0312 19:08:53.950551 586 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0312 19:08:53.950570 586 log.go:172] (0xc0000f5290) (0xc0006bdea0) Create stream\nI0312 19:08:53.950575 586 log.go:172] (0xc0000f5290) (0xc0006bdea0) Stream added, broadcasting: 3\nI0312 19:08:53.951045 586 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0312 19:08:53.951062 586 log.go:172] (0xc0000f5290) (0xc000af6000) Create stream\nI0312 19:08:53.951068 586 log.go:172] (0xc0000f5290) (0xc000af6000) Stream added, broadcasting: 5\nI0312 19:08:53.951598 586 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0312 19:08:54.008012 586 log.go:172] (0xc0000f5290) Data frame received for 5\nI0312 19:08:54.008036 586 log.go:172] (0xc000af6000) (5) Data frame handling\nI0312 19:08:54.008050 586 log.go:172] (0xc000af6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 19:08:54.027297 586 log.go:172] (0xc0000f5290) Data frame received for 3\nI0312 19:08:54.027324 586 log.go:172] (0xc0006bdea0) (3) Data frame handling\nI0312 19:08:54.027345 586 log.go:172] (0xc0006bdea0) (3) Data frame sent\nI0312 19:08:54.027442 586 log.go:172] (0xc0000f5290) Data frame received for 3\nI0312 19:08:54.027459 586 log.go:172] (0xc0006bdea0) (3) Data frame handling\nI0312 19:08:54.027538 586 log.go:172] (0xc0000f5290) Data frame received for 5\nI0312 19:08:54.027554 586 log.go:172] (0xc000af6000) (5) Data frame handling\nI0312 19:08:54.029517 586 log.go:172] (0xc0000f5290) Data frame received for 1\nI0312 19:08:54.029535 586 log.go:172] (0xc0006bde00) (1) Data frame handling\nI0312 19:08:54.029546 586 log.go:172] (0xc0006bde00) (1) Data frame sent\nI0312 19:08:54.029563 586 log.go:172] (0xc0000f5290) (0xc0006bde00) Stream removed, broadcasting: 1\nI0312 19:08:54.029581 586 log.go:172] (0xc0000f5290) Go away received\nI0312 19:08:54.029936 586 log.go:172] (0xc0000f5290) (0xc0006bde00) Stream removed, broadcasting: 1\nI0312 19:08:54.029953 586 log.go:172] (0xc0000f5290) (0xc0006bdea0) Stream removed, broadcasting: 3\nI0312 19:08:54.029960 586 log.go:172] (0xc0000f5290) (0xc000af6000) Stream removed, broadcasting: 5\n" Mar 12 19:08:54.033: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 
19:08:54.033: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 19:08:54.033: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 19:08:54.036: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 12 19:09:04.042: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 19:09:04.042: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 12 19:09:04.042: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 12 19:09:04.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999209s Mar 12 19:09:05.063: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991030944s Mar 12 19:09:06.067: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986632185s Mar 12 19:09:07.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982320942s Mar 12 19:09:08.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977669927s Mar 12 19:09:09.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973530283s Mar 12 19:09:10.085: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9687373s Mar 12 19:09:11.089: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964536599s Mar 12 19:09:12.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.960123828s Mar 12 19:09:13.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 954.983677ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9635 Mar 12 19:09:14.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 19:09:14.344: INFO: stderr: "I0312 19:09:14.281600 606 log.go:172] (0xc000991340) (0xc000b5a820) Create stream\nI0312 19:09:14.281649 606 log.go:172] (0xc000991340) (0xc000b5a820) Stream added, broadcasting: 1\nI0312 19:09:14.287608 606 log.go:172] (0xc000991340) Reply frame received for 1\nI0312 19:09:14.287664 606 log.go:172] (0xc000991340) (0xc00070fb80) Create stream\nI0312 19:09:14.287681 606 log.go:172] (0xc000991340) (0xc00070fb80) Stream added, broadcasting: 3\nI0312 19:09:14.289848 606 log.go:172] (0xc000991340) Reply frame received for 3\nI0312 19:09:14.289870 606 log.go:172] (0xc000991340) (0xc0006ca780) Create stream\nI0312 19:09:14.289877 606 log.go:172] (0xc000991340) (0xc0006ca780) Stream added, broadcasting: 5\nI0312 19:09:14.290721 606 log.go:172] (0xc000991340) Reply frame received for 5\nI0312 19:09:14.339782 606 log.go:172] (0xc000991340) Data frame received for 5\nI0312 19:09:14.339815 606 log.go:172] (0xc0006ca780) (5) Data frame handling\nI0312 19:09:14.339824 606 log.go:172] (0xc0006ca780) (5) Data frame sent\nI0312 19:09:14.339829 606 log.go:172] (0xc000991340) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 19:09:14.339862 606 log.go:172] (0xc000991340) Data frame received for 3\nI0312 19:09:14.339914 606 log.go:172] (0xc00070fb80) (3) Data frame handling\nI0312 19:09:14.339932 606 log.go:172] (0xc00070fb80) (3) Data frame sent\nI0312 19:09:14.339959 606 log.go:172] (0xc000991340) Data frame received for 3\nI0312 19:09:14.339970 606 log.go:172] (0xc0006ca780) (5) Data frame 
handling\nI0312 19:09:14.339992 606 log.go:172] (0xc00070fb80) (3) Data frame handling\nI0312 19:09:14.340686 606 log.go:172] (0xc000991340) Data frame received for 1\nI0312 19:09:14.340698 606 log.go:172] (0xc000b5a820) (1) Data frame handling\nI0312 19:09:14.340703 606 log.go:172] (0xc000b5a820) (1) Data frame sent\nI0312 19:09:14.340710 606 log.go:172] (0xc000991340) (0xc000b5a820) Stream removed, broadcasting: 1\nI0312 19:09:14.340752 606 log.go:172] (0xc000991340) Go away received\nI0312 19:09:14.340940 606 log.go:172] (0xc000991340) (0xc000b5a820) Stream removed, broadcasting: 1\nI0312 19:09:14.340950 606 log.go:172] (0xc000991340) (0xc00070fb80) Stream removed, broadcasting: 3\nI0312 19:09:14.340955 606 log.go:172] (0xc000991340) (0xc0006ca780) Stream removed, broadcasting: 5\n" Mar 12 19:09:14.344: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 19:09:14.344: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 19:09:14.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 19:09:14.507: INFO: stderr: "I0312 19:09:14.434282 626 log.go:172] (0xc000a1e000) (0xc000616500) Create stream\nI0312 19:09:14.434320 626 log.go:172] (0xc000a1e000) (0xc000616500) Stream added, broadcasting: 1\nI0312 19:09:14.435961 626 log.go:172] (0xc000a1e000) Reply frame received for 1\nI0312 19:09:14.435981 626 log.go:172] (0xc000a1e000) (0xc0002372c0) Create stream\nI0312 19:09:14.435988 626 log.go:172] (0xc000a1e000) (0xc0002372c0) Stream added, broadcasting: 3\nI0312 19:09:14.436523 626 log.go:172] (0xc000a1e000) Reply frame received for 3\nI0312 19:09:14.436539 626 log.go:172] (0xc000a1e000) (0xc000237360) Create stream\nI0312 19:09:14.436544 626 log.go:172] (0xc000a1e000) (0xc000237360) Stream added, broadcasting: 5\nI0312 19:09:14.437057 626 log.go:172] (0xc000a1e000) Reply frame received for 5\nI0312 19:09:14.502789 626 log.go:172] (0xc000a1e000) Data frame received for 3\nI0312 19:09:14.502811 626 log.go:172] (0xc000a1e000) Data frame received for 5\nI0312 19:09:14.502829 626 log.go:172] (0xc000237360) (5) Data frame handling\nI0312 19:09:14.502843 626 log.go:172] (0xc000237360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 19:09:14.502850 626 log.go:172] (0xc000a1e000) Data frame received for 5\nI0312 19:09:14.502863 626 log.go:172] (0xc000237360) (5) Data frame handling\nI0312 19:09:14.502876 626 log.go:172] (0xc0002372c0) (3) Data frame handling\nI0312 19:09:14.502882 626 log.go:172] (0xc0002372c0) (3) Data frame sent\nI0312 19:09:14.502888 626 log.go:172] (0xc000a1e000) Data frame received for 3\nI0312 19:09:14.502894 626 log.go:172] (0xc0002372c0) (3) Data frame handling\nI0312 19:09:14.503467 626 log.go:172] (0xc000a1e000) Data frame received for 1\nI0312 19:09:14.503481 626 log.go:172] (0xc000616500) (1) Data frame handling\nI0312 19:09:14.503489 626 log.go:172] (0xc000616500) (1) Data frame sent\nI0312 19:09:14.503499 626 log.go:172] (0xc000a1e000) (0xc000616500) Stream removed, broadcasting: 1\nI0312 19:09:14.503512 626 log.go:172] (0xc000a1e000) Go away received\nI0312 19:09:14.503764 626 log.go:172] (0xc000a1e000) (0xc000616500) Stream removed, broadcasting: 1\nI0312 19:09:14.503774 626 log.go:172] (0xc000a1e000) (0xc0002372c0) Stream removed, broadcasting: 3\nI0312 19:09:14.503778 626 
log.go:172] (0xc000a1e000) (0xc000237360) Stream removed, broadcasting: 5\n" Mar 12 19:09:14.507: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 19:09:14.507: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 19:09:14.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 19:09:14.648: INFO: stderr: "I0312 19:09:14.589896 646 log.go:172] (0xc000a2de40) (0xc0009f86e0) Create stream\nI0312 19:09:14.589928 646 log.go:172] (0xc000a2de40) (0xc0009f86e0) Stream added, broadcasting: 1\nI0312 19:09:14.592647 646 log.go:172] (0xc000a2de40) Reply frame received for 1\nI0312 19:09:14.592676 646 log.go:172] (0xc000a2de40) (0xc000620640) Create stream\nI0312 19:09:14.592685 646 log.go:172] (0xc000a2de40) (0xc000620640) Stream added, broadcasting: 3\nI0312 19:09:14.593266 646 log.go:172] (0xc000a2de40) Reply frame received for 3\nI0312 19:09:14.593287 646 log.go:172] (0xc000a2de40) (0xc0005e75e0) Create stream\nI0312 19:09:14.593297 646 log.go:172] (0xc000a2de40) (0xc0005e75e0) Stream added, broadcasting: 5\nI0312 19:09:14.593776 646 log.go:172] (0xc000a2de40) Reply frame received for 5\nI0312 19:09:14.643474 646 log.go:172] (0xc000a2de40) Data frame received for 5\nI0312 19:09:14.643502 646 log.go:172] (0xc0005e75e0) (5) Data frame handling\nI0312 19:09:14.643516 646 log.go:172] (0xc0005e75e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 19:09:14.643531 646 log.go:172] (0xc000a2de40) Data frame received for 3\nI0312 19:09:14.643540 646 log.go:172] (0xc000620640) (3) Data frame handling\nI0312 19:09:14.643550 646 log.go:172] (0xc000620640) (3) Data frame sent\nI0312 19:09:14.643564 646 log.go:172] (0xc000a2de40) Data frame received for 3\nI0312 19:09:14.643574 646 log.go:172] (0xc000620640) (3) Data frame handling\nI0312 19:09:14.643587 646 log.go:172] (0xc000a2de40) Data frame received for 5\nI0312 19:09:14.643596 646 log.go:172] (0xc0005e75e0) (5) Data frame handling\nI0312 19:09:14.644758 646 log.go:172] (0xc000a2de40) Data frame received for 1\nI0312 19:09:14.644788 646 log.go:172] (0xc0009f86e0) (1) Data frame handling\nI0312 19:09:14.644801 646 log.go:172] (0xc0009f86e0) (1) Data frame sent\nI0312 19:09:14.644814 646 log.go:172] (0xc000a2de40) (0xc0009f86e0) Stream removed, broadcasting: 1\nI0312 19:09:14.644828 646 log.go:172] (0xc000a2de40) Go away received\nI0312 19:09:14.645149 646 log.go:172] (0xc000a2de40) (0xc0009f86e0) Stream removed, broadcasting: 1\nI0312 19:09:14.645164 646 log.go:172] (0xc000a2de40) (0xc000620640) Stream removed, broadcasting: 3\nI0312 19:09:14.645171 646 log.go:172] (0xc000a2de40) (0xc0005e75e0) Stream removed, broadcasting: 5\n" Mar 12 19:09:14.648: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 19:09:14.648: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 19:09:14.648: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 19:09:24.660: INFO: Deleting all statefulset in ns statefulset-9635 Mar 12 
19:09:24.663: INFO: Scaling statefulset ss to 0 Mar 12 19:09:24.670: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 19:09:24.671: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:09:24.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9635" for this suite. • [SLOW TEST:71.902 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":33,"skipped":510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:09:24.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-cbd3ccc1-559f-4820-9b93-a83bc7d6c329 STEP: Creating a pod to test consume secrets Mar 12 19:09:24.748: INFO: Waiting up to 5m0s for pod "pod-secrets-38774f98-68f6-4d56-835f-b716f2775045" in namespace "secrets-288" to be "success or failure" Mar 12 19:09:24.752: INFO: Pod "pod-secrets-38774f98-68f6-4d56-835f-b716f2775045": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809071ms Mar 12 19:09:26.756: INFO: Pod "pod-secrets-38774f98-68f6-4d56-835f-b716f2775045": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00733564s STEP: Saw pod success Mar 12 19:09:26.756: INFO: Pod "pod-secrets-38774f98-68f6-4d56-835f-b716f2775045" satisfied condition "success or failure" Mar 12 19:09:26.759: INFO: Trying to get logs from node jerma-worker pod pod-secrets-38774f98-68f6-4d56-835f-b716f2775045 container secret-volume-test: STEP: delete the pod Mar 12 19:09:26.796: INFO: Waiting for pod pod-secrets-38774f98-68f6-4d56-835f-b716f2775045 to disappear Mar 12 19:09:26.806: INFO: Pod pod-secrets-38774f98-68f6-4d56-835f-b716f2775045 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:09:26.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-288" for this suite. 
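The Secrets test above mounts a secret as a volume while remapping the key to a new path and setting an explicit item mode. A minimal sketch of the same shape, with illustrative names; 0400 makes the projected file readable only by its owner:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # key data-1 is remapped to this file name
            mode: 0400
    EOF
    kubectl logs secret-mapping-demo   # should print value-1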
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":551,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:09:26.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:09:43.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1486" for this suite. • [SLOW TEST:17.151 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":35,"skipped":595,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:09:43.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 12 19:09:44.042: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8750 /api/v1/namespaces/watch-8750/configmaps/e2e-watch-test-resource-version 8d1ff796-b318-42e1-b40a-7ce04126093a 1196274 0 2020-03-12 19:09:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 19:09:44.042: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8750 /api/v1/namespaces/watch-8750/configmaps/e2e-watch-test-resource-version 8d1ff796-b318-42e1-b40a-7ce04126093a 1196275 0 2020-03-12 19:09:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:09:44.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8750" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":36,"skipped":600,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:09:44.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 19:09:44.155: INFO: Waiting up to 5m0s for pod "downward-api-33d06377-d539-4890-9981-d13d963b1a80" in namespace "downward-api-1939" to be "success or failure" Mar 12 19:09:44.161: INFO: Pod "downward-api-33d06377-d539-4890-9981-d13d963b1a80": Phase="Pending", Reason="", readiness=false. Elapsed: 5.20604ms Mar 12 19:09:46.163: INFO: Pod "downward-api-33d06377-d539-4890-9981-d13d963b1a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007951358s STEP: Saw pod success Mar 12 19:09:46.163: INFO: Pod "downward-api-33d06377-d539-4890-9981-d13d963b1a80" satisfied condition "success or failure" Mar 12 19:09:46.167: INFO: Trying to get logs from node jerma-worker2 pod downward-api-33d06377-d539-4890-9981-d13d963b1a80 container dapi-container: STEP: delete the pod Mar 12 19:09:46.211: INFO: Waiting for pod downward-api-33d06377-d539-4890-9981-d13d963b1a80 to disappear Mar 12 19:09:46.214: INFO: Pod downward-api-33d06377-d539-4890-9981-d13d963b1a80 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:09:46.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1939" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:09:46.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 12 19:09:46.850: INFO: Pod name wrapped-volume-race-a9f6d2f4-87b9-4451-90e8-3eaf11c6a417: Found 0 pods out of 5 Mar 12 19:09:51.856: INFO: Pod name wrapped-volume-race-a9f6d2f4-87b9-4451-90e8-3eaf11c6a417: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a9f6d2f4-87b9-4451-90e8-3eaf11c6a417 in namespace emptydir-wrapper-5022, will wait for the garbage collector to delete the pods Mar 12 19:10:01.981: INFO: Deleting ReplicationController wrapped-volume-race-a9f6d2f4-87b9-4451-90e8-3eaf11c6a417 took: 5.700243ms Mar 12 19:10:02.281: INFO: Terminating ReplicationController wrapped-volume-race-a9f6d2f4-87b9-4451-90e8-3eaf11c6a417 pods took: 300.257623ms STEP: Creating RC which spawns configmap-volume pods Mar 12 19:10:08.615: INFO: Pod name wrapped-volume-race-ad1ea2c9-147b-403c-9531-e02d8a29fb29: Found 0 pods out of 5 Mar 12 19:10:13.622: INFO: Pod name wrapped-volume-race-ad1ea2c9-147b-403c-9531-e02d8a29fb29: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ad1ea2c9-147b-403c-9531-e02d8a29fb29 in namespace emptydir-wrapper-5022, will wait for the garbage collector to delete the pods Mar 12 19:10:25.705: INFO: Deleting ReplicationController wrapped-volume-race-ad1ea2c9-147b-403c-9531-e02d8a29fb29 took: 4.969248ms Mar 12 19:10:25.805: INFO: Terminating ReplicationController wrapped-volume-race-ad1ea2c9-147b-403c-9531-e02d8a29fb29 pods took: 100.191094ms STEP: Creating RC which spawns configmap-volume pods Mar 12 19:10:31.049: INFO: Pod name wrapped-volume-race-4126c0b5-0a5b-418b-94d8-1d18d50b53d9: Found 0 pods out of 5 Mar 12 19:10:36.055: INFO: Pod name wrapped-volume-race-4126c0b5-0a5b-418b-94d8-1d18d50b53d9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4126c0b5-0a5b-418b-94d8-1d18d50b53d9 in namespace emptydir-wrapper-5022, will wait for the garbage collector to delete the pods Mar 12 19:10:48.156: INFO: Deleting ReplicationController wrapped-volume-race-4126c0b5-0a5b-418b-94d8-1d18d50b53d9 took: 24.489059ms Mar 12 19:10:48.456: INFO: Terminating ReplicationController wrapped-volume-race-4126c0b5-0a5b-418b-94d8-1d18d50b53d9 pods took: 300.291253ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:10:56.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5022" for this suite. • [SLOW TEST:70.737 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":38,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:10:56.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:10:57.053: INFO: Creating ReplicaSet my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7 Mar 12 19:10:57.076: INFO: Pod name my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7: Found 0 pods out of 1 Mar 12 19:11:02.079: INFO: Pod name my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7: Found 1 pods out of 1 Mar 12 19:11:02.079: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7" is running Mar 12 19:11:02.082: INFO: Pod "my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7-mjws9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:10:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:10:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:10:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:10:57 +0000 UTC Reason: Message:}]) Mar 12 19:11:02.082: INFO: Trying to dial the pod Mar 12 19:11:07.110: INFO: Controller my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7: Got expected result from replica 1 [my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7-mjws9]: "my-hostname-basic-90fc91bd-9ac1-43f5-94f4-94ac174066c7-mjws9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:11:07.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5282" for this suite. 
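------------------------------
Note on the ReplicaSet spec above: it creates a one-replica set and dials the pod, expecting each replica to answer with its own hostname (the reply logged is the pod name). A sketch of creating an equivalent ReplicaSet; the name and labels are illustrative, the serve-hostname behaviour of the agnhost image is an assumption not shown in this log, and the context-taking Create is client-go v0.18+:

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        replicas := int32(1)
        labels := map[string]string{"name": "my-hostname-basic"} // illustrative
        rs := &appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                        Args:  []string{"serve-hostname"}, // assumption: replies with the pod's hostname
                    }}},
                },
            },
        }
        if _, err := cs.AppsV1().ReplicaSets("replicaset-5282").Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------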
• [SLOW TEST:10.158 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":39,"skipped":647,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:11:07.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3127 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3127 I0312 19:11:07.274299 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3127, replica count: 2 I0312 19:11:10.324654 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 19:11:10.324: INFO: Creating new exec pod Mar 12 19:11:13.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3127 execpod48mwj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 12 19:11:13.509: INFO: stderr: "I0312 19:11:13.440956 668 log.go:172] (0xc0004f4630) (0xc000950140) Create stream\nI0312 19:11:13.441009 668 log.go:172] (0xc0004f4630) (0xc000950140) Stream added, broadcasting: 1\nI0312 19:11:13.442809 668 log.go:172] (0xc0004f4630) Reply frame received for 1\nI0312 19:11:13.442844 668 log.go:172] (0xc0004f4630) (0xc0009e8000) Create stream\nI0312 19:11:13.442853 668 log.go:172] (0xc0004f4630) (0xc0009e8000) Stream added, broadcasting: 3\nI0312 19:11:13.443720 668 log.go:172] (0xc0004f4630) Reply frame received for 3\nI0312 19:11:13.443742 668 log.go:172] (0xc0004f4630) (0xc0009e80a0) Create stream\nI0312 19:11:13.443749 668 log.go:172] (0xc0004f4630) (0xc0009e80a0) Stream added, broadcasting: 5\nI0312 19:11:13.444535 668 log.go:172] (0xc0004f4630) Reply frame received for 5\nI0312 19:11:13.503409 668 log.go:172] (0xc0004f4630) Data frame received for 5\nI0312 19:11:13.503443 668 log.go:172] (0xc0009e80a0) (5) Data frame handling\nI0312 19:11:13.503458 668 log.go:172] (0xc0009e80a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0312 19:11:13.504175 668 log.go:172] (0xc0004f4630) Data frame received for 3\nI0312 19:11:13.504214 668 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0312 19:11:13.504232 668 log.go:172] 
(0xc0004f4630) Data frame received for 5\nI0312 19:11:13.504238 668 log.go:172] (0xc0009e80a0) (5) Data frame handling\nI0312 19:11:13.504246 668 log.go:172] (0xc0009e80a0) (5) Data frame sent\nI0312 19:11:13.504255 668 log.go:172] (0xc0004f4630) Data frame received for 5\nI0312 19:11:13.504259 668 log.go:172] (0xc0009e80a0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0312 19:11:13.505433 668 log.go:172] (0xc0004f4630) Data frame received for 1\nI0312 19:11:13.505445 668 log.go:172] (0xc000950140) (1) Data frame handling\nI0312 19:11:13.505451 668 log.go:172] (0xc000950140) (1) Data frame sent\nI0312 19:11:13.505461 668 log.go:172] (0xc0004f4630) (0xc000950140) Stream removed, broadcasting: 1\nI0312 19:11:13.505473 668 log.go:172] (0xc0004f4630) Go away received\nI0312 19:11:13.505865 668 log.go:172] (0xc0004f4630) (0xc000950140) Stream removed, broadcasting: 1\nI0312 19:11:13.505886 668 log.go:172] (0xc0004f4630) (0xc0009e8000) Stream removed, broadcasting: 3\nI0312 19:11:13.505895 668 log.go:172] (0xc0004f4630) (0xc0009e80a0) Stream removed, broadcasting: 5\n" Mar 12 19:11:13.509: INFO: stdout: "" Mar 12 19:11:13.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3127 execpod48mwj -- /bin/sh -x -c nc -zv -t -w 2 10.108.99.227 80' Mar 12 19:11:13.681: INFO: stderr: "I0312 19:11:13.615685 690 log.go:172] (0xc000974000) (0xc00067a6e0) Create stream\nI0312 19:11:13.615740 690 log.go:172] (0xc000974000) (0xc00067a6e0) Stream added, broadcasting: 1\nI0312 19:11:13.617746 690 log.go:172] (0xc000974000) Reply frame received for 1\nI0312 19:11:13.617775 690 log.go:172] (0xc000974000) (0xc0005494a0) Create stream\nI0312 19:11:13.617784 690 log.go:172] (0xc000974000) (0xc0005494a0) Stream added, broadcasting: 3\nI0312 19:11:13.618503 690 log.go:172] (0xc000974000) Reply frame received for 3\nI0312 19:11:13.618529 690 log.go:172] (0xc000974000) (0xc0007c8000) Create stream\nI0312 19:11:13.618537 690 log.go:172] (0xc000974000) (0xc0007c8000) Stream added, broadcasting: 5\nI0312 19:11:13.619316 690 log.go:172] (0xc000974000) Reply frame received for 5\nI0312 19:11:13.676397 690 log.go:172] (0xc000974000) Data frame received for 3\nI0312 19:11:13.676504 690 log.go:172] (0xc0005494a0) (3) Data frame handling\nI0312 19:11:13.676545 690 log.go:172] (0xc000974000) Data frame received for 5\nI0312 19:11:13.676567 690 log.go:172] (0xc0007c8000) (5) Data frame handling\nI0312 19:11:13.676583 690 log.go:172] (0xc0007c8000) (5) Data frame sent\nI0312 19:11:13.676590 690 log.go:172] (0xc000974000) Data frame received for 5\nI0312 19:11:13.676595 690 log.go:172] (0xc0007c8000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.99.227 80\nConnection to 10.108.99.227 80 port [tcp/http] succeeded!\nI0312 19:11:13.677662 690 log.go:172] (0xc000974000) Data frame received for 1\nI0312 19:11:13.677677 690 log.go:172] (0xc00067a6e0) (1) Data frame handling\nI0312 19:11:13.677690 690 log.go:172] (0xc00067a6e0) (1) Data frame sent\nI0312 19:11:13.677700 690 log.go:172] (0xc000974000) (0xc00067a6e0) Stream removed, broadcasting: 1\nI0312 19:11:13.677855 690 log.go:172] (0xc000974000) Go away received\nI0312 19:11:13.677993 690 log.go:172] (0xc000974000) (0xc00067a6e0) Stream removed, broadcasting: 1\nI0312 19:11:13.678011 690 log.go:172] (0xc000974000) (0xc0005494a0) Stream removed, broadcasting: 3\nI0312 19:11:13.678021 690 log.go:172] (0xc000974000) (0xc0007c8000) Stream removed, broadcasting: 5\n" Mar 12 19:11:13.681: INFO: 
stdout: "" Mar 12 19:11:13.681: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:11:13.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3127" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.597 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":40,"skipped":659,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:11:13.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:11:44.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8287" for this suite. STEP: Destroying namespace "nsdeletetest-5067" for this suite. Mar 12 19:11:44.935: INFO: Namespace nsdeletetest-5067 was already deleted STEP: Destroying namespace "nsdeletetest-6703" for this suite. 
• [SLOW TEST:31.223 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":41,"skipped":659,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:11:44.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9493/configmap-test-f2e71140-7579-4fd0-b651-eade7c3a1c91 STEP: Creating a pod to test consume configMaps Mar 12 19:11:44.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3679f8b-6dde-408c-ba39-481bc4749e45" in namespace "configmap-9493" to be "success or failure" Mar 12 19:11:45.019: INFO: Pod "pod-configmaps-c3679f8b-6dde-408c-ba39-481bc4749e45": Phase="Pending", Reason="", readiness=false. Elapsed: 22.976319ms Mar 12 19:11:47.023: INFO: Pod "pod-configmaps-c3679f8b-6dde-408c-ba39-481bc4749e45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026778206s STEP: Saw pod success Mar 12 19:11:47.023: INFO: Pod "pod-configmaps-c3679f8b-6dde-408c-ba39-481bc4749e45" satisfied condition "success or failure" Mar 12 19:11:47.044: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c3679f8b-6dde-408c-ba39-481bc4749e45 container env-test: STEP: delete the pod Mar 12 19:11:47.074: INFO: Waiting for pod pod-configmaps-c3679f8b-6dde-408c-ba39-481bc4749e45 to disappear Mar 12 19:11:47.078: INFO: Pod pod-configmaps-c3679f8b-6dde-408c-ba39-481bc4749e45 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:11:47.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9493" for this suite. 
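------------------------------
Note on the ConfigMap spec above: a single ConfigMap key is surfaced to the container as an environment variable via a configMapKeyRef, which the env-test container then echoes. A sketch of the two corev1 pieces; the names, key and value are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // The ConfigMap holding the value (illustrative key/value).
        cm := corev1.ConfigMap{Data: map[string]string{"data-1": "value-1"}}

        // The container-side reference that resolves to that value at start.
        env := corev1.EnvVar{
            Name: "CONFIG_DATA_1",
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                    Key:                  "data-1",
                },
            },
        }
        fmt.Println(cm.Data, env.Name)
    }
------------------------------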
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":672,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:11:47.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 19:11:47.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3310' Mar 12 19:11:47.407: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 19:11:47.407: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 12 19:11:47.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3310' Mar 12 19:11:47.548: INFO: stderr: "" Mar 12 19:11:47.548: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:11:47.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3310" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":43,"skipped":674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:11:47.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:11:47.628: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 12 19:11:52.631: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 19:11:52.631: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 12 19:11:54.635: INFO: Creating deployment "test-rollover-deployment" Mar 12 19:11:54.646: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 12 19:11:56.652: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 12 19:11:56.658: INFO: Ensure that both replica sets have 1 created replica Mar 12 19:11:56.662: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 12 19:11:56.667: INFO: Updating deployment test-rollover-deployment Mar 12 19:11:56.667: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 12 19:11:58.679: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 12 19:11:58.685: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 12 19:11:58.692: INFO: all replica sets need to contain the pod-template-hash label Mar 12 19:11:58.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637116, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:12:00.700: INFO: all replica sets need to contain the pod-template-hash label Mar 12 19:12:00.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637118, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:12:02.700: INFO: all replica sets need to contain the pod-template-hash label Mar 12 19:12:02.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637118, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:12:04.700: INFO: all replica sets need to contain the pod-template-hash label Mar 12 19:12:04.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637118, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:12:06.699: INFO: all replica sets need to contain the pod-template-hash label Mar 12 19:12:06.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637118, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:12:08.700: INFO: all replica sets need to contain the pod-template-hash label Mar 12 19:12:08.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637118, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637114, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:12:10.700: INFO: Mar 12 19:12:10.700: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 19:12:10.708: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7529 /apis/apps/v1/namespaces/deployment-7529/deployments/test-rollover-deployment 10477cf5-6704-4f4b-a95d-e258468b13e0 1197847 2 2020-03-12 19:11:54 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003aa1188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 19:11:54 +0000 UTC,LastTransitionTime:2020-03-12 19:11:54 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-12 19:12:08 +0000 UTC,LastTransitionTime:2020-03-12 19:11:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 12 19:12:10.711: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-7529 /apis/apps/v1/namespaces/deployment-7529/replicasets/test-rollover-deployment-574d6dfbff 4432f7fd-ef29-42d1-bc8a-3709ed7c25d3 1197836 2 2020-03-12 19:11:56 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 10477cf5-6704-4f4b-a95d-e258468b13e0 0xc003514d07 0xc003514d08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003514d78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 19:12:10.711: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 12 19:12:10.711: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7529 /apis/apps/v1/namespaces/deployment-7529/replicasets/test-rollover-controller a3b1efa5-dd12-4474-a059-f660e824ca80 1197846 2 2020-03-12 19:11:47 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 10477cf5-6704-4f4b-a95d-e258468b13e0 0xc003514c37 0xc003514c38}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003514c98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 19:12:10.711: 
INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-7529 /apis/apps/v1/namespaces/deployment-7529/replicasets/test-rollover-deployment-f6c94f66c 72afe082-0fac-466f-8746-1bcf5da4b187 1197790 2 2020-03-12 19:11:54 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 10477cf5-6704-4f4b-a95d-e258468b13e0 0xc003514de0 0xc003514de1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003514e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 19:12:10.715: INFO: Pod "test-rollover-deployment-574d6dfbff-cvkqb" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-cvkqb test-rollover-deployment-574d6dfbff- deployment-7529 /api/v1/namespaces/deployment-7529/pods/test-rollover-deployment-574d6dfbff-cvkqb 50bbfc3e-5507-416b-abb2-defbcec93d4a 1197804 0 2020-03-12 19:11:56 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 4432f7fd-ef29-42d1-bc8a-3709ed7c25d3 0xc003515377 0xc003515378}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2t284,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2t284,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2t284,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:11:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:11:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:11:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:11:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.136,StartTime:2020-03-12 19:11:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:11:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://da973619a38871716f481a8ee947b18bbdc841c80ed41080d8f0d23b1cd24455,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:12:10.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7529" for this suite. • [SLOW TEST:23.165 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":44,"skipped":712,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:12:10.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 12 19:12:15.882: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:12:15.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1201" for this suite. 
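------------------------------
Note on the adopt/release spec above: ownership follows the selector. An orphan pod whose labels match is adopted by the ReplicaSet, and changing a matched label makes the controller clear the ownerReference again (releasing the pod) and spawn a replacement. A sketch of the release half as a strategic-merge patch; the replacement label value is illustrative, the pod and namespace names are the ones logged, and the context-taking Patch is client-go v0.18+:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Overwrite the pod's "name" label so it no longer matches the
        // ReplicaSet selector; the controller then releases the pod.
        patch := []byte(`{"metadata":{"labels":{"name":"not-pod-adoption-release"}}}`)
        _, err = cs.CoreV1().Pods("replicaset-1201").Patch(
            context.TODO(), "pod-adoption-release",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }
------------------------------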
• [SLOW TEST:5.283 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":45,"skipped":728,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:12:16.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2116 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 19:12:16.094: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 19:12:40.224: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.139:8080/dial?request=hostname&protocol=udp&host=10.244.2.140&port=8081&tries=1'] Namespace:pod-network-test-2116 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:12:40.224: INFO: >>> kubeConfig: /root/.kube/config I0312 19:12:40.250625 6 log.go:172] (0xc00173a580) (0xc0015b7cc0) Create stream I0312 19:12:40.250665 6 log.go:172] (0xc00173a580) (0xc0015b7cc0) Stream added, broadcasting: 1 I0312 19:12:40.252888 6 log.go:172] (0xc00173a580) Reply frame received for 1 I0312 19:12:40.252920 6 log.go:172] (0xc00173a580) (0xc001f70640) Create stream I0312 19:12:40.252931 6 log.go:172] (0xc00173a580) (0xc001f70640) Stream added, broadcasting: 3 I0312 19:12:40.253798 6 log.go:172] (0xc00173a580) Reply frame received for 3 I0312 19:12:40.253822 6 log.go:172] (0xc00173a580) (0xc0015b7d60) Create stream I0312 19:12:40.253830 6 log.go:172] (0xc00173a580) (0xc0015b7d60) Stream added, broadcasting: 5 I0312 19:12:40.254665 6 log.go:172] (0xc00173a580) Reply frame received for 5 I0312 19:12:40.312361 6 log.go:172] (0xc00173a580) Data frame received for 3 I0312 19:12:40.312382 6 log.go:172] (0xc001f70640) (3) Data frame handling I0312 19:12:40.312392 6 log.go:172] (0xc001f70640) (3) Data frame sent I0312 19:12:40.313207 6 log.go:172] (0xc00173a580) Data frame received for 3 I0312 19:12:40.313225 6 log.go:172] (0xc001f70640) (3) Data frame handling I0312 19:12:40.313246 6 log.go:172] (0xc00173a580) Data frame received for 5 I0312 19:12:40.313256 6 log.go:172] (0xc0015b7d60) (5) Data frame handling I0312 19:12:40.314319 6 log.go:172] (0xc00173a580) Data frame received for 1 I0312 19:12:40.314348 6 log.go:172] (0xc0015b7cc0) (1) Data frame handling I0312 19:12:40.314367 6 log.go:172] (0xc0015b7cc0) (1) Data frame sent I0312 19:12:40.314421 6 log.go:172] 
(0xc00173a580) (0xc0015b7cc0) Stream removed, broadcasting: 1 I0312 19:12:40.314492 6 log.go:172] (0xc00173a580) (0xc0015b7cc0) Stream removed, broadcasting: 1 I0312 19:12:40.314508 6 log.go:172] (0xc00173a580) (0xc001f70640) Stream removed, broadcasting: 3 I0312 19:12:40.314633 6 log.go:172] (0xc00173a580) Go away received I0312 19:12:40.314658 6 log.go:172] (0xc00173a580) (0xc0015b7d60) Stream removed, broadcasting: 5 Mar 12 19:12:40.314: INFO: Waiting for responses: map[] Mar 12 19:12:40.317: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.139:8080/dial?request=hostname&protocol=udp&host=10.244.1.138&port=8081&tries=1'] Namespace:pod-network-test-2116 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:12:40.317: INFO: >>> kubeConfig: /root/.kube/config I0312 19:12:40.337434 6 log.go:172] (0xc0024fea50) (0xc001f708c0) Create stream I0312 19:12:40.337453 6 log.go:172] (0xc0024fea50) (0xc001f708c0) Stream added, broadcasting: 1 I0312 19:12:40.339354 6 log.go:172] (0xc0024fea50) Reply frame received for 1 I0312 19:12:40.339381 6 log.go:172] (0xc0024fea50) (0xc0023cb540) Create stream I0312 19:12:40.339390 6 log.go:172] (0xc0024fea50) (0xc0023cb540) Stream added, broadcasting: 3 I0312 19:12:40.340158 6 log.go:172] (0xc0024fea50) Reply frame received for 3 I0312 19:12:40.340184 6 log.go:172] (0xc0024fea50) (0xc0021da000) Create stream I0312 19:12:40.340197 6 log.go:172] (0xc0024fea50) (0xc0021da000) Stream added, broadcasting: 5 I0312 19:12:40.340925 6 log.go:172] (0xc0024fea50) Reply frame received for 5 I0312 19:12:40.399608 6 log.go:172] (0xc0024fea50) Data frame received for 3 I0312 19:12:40.399629 6 log.go:172] (0xc0023cb540) (3) Data frame handling I0312 19:12:40.399657 6 log.go:172] (0xc0023cb540) (3) Data frame sent I0312 19:12:40.399998 6 log.go:172] (0xc0024fea50) Data frame received for 5 I0312 19:12:40.400013 6 log.go:172] (0xc0021da000) (5) Data frame handling I0312 19:12:40.400045 6 log.go:172] (0xc0024fea50) Data frame received for 3 I0312 19:12:40.400068 6 log.go:172] (0xc0023cb540) (3) Data frame handling I0312 19:12:40.401500 6 log.go:172] (0xc0024fea50) Data frame received for 1 I0312 19:12:40.401516 6 log.go:172] (0xc001f708c0) (1) Data frame handling I0312 19:12:40.401528 6 log.go:172] (0xc001f708c0) (1) Data frame sent I0312 19:12:40.401537 6 log.go:172] (0xc0024fea50) (0xc001f708c0) Stream removed, broadcasting: 1 I0312 19:12:40.401548 6 log.go:172] (0xc0024fea50) Go away received I0312 19:12:40.401711 6 log.go:172] (0xc0024fea50) (0xc001f708c0) Stream removed, broadcasting: 1 I0312 19:12:40.401731 6 log.go:172] (0xc0024fea50) (0xc0023cb540) Stream removed, broadcasting: 3 I0312 19:12:40.401738 6 log.go:172] (0xc0024fea50) (0xc0021da000) Stream removed, broadcasting: 5 Mar 12 19:12:40.401: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:12:40.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2116" for this suite. 
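------------------------------
Note on the networking spec above: pod-to-pod UDP is verified indirectly. The test asks one test pod's HTTP endpoint (agnhost listening on :8080) to dial the other pod's UDP port 8081 via the /dial route visible in the curl commands, then matches the JSON response against the expected hostnames. A sketch of issuing the same probe from anywhere that can reach the pod IP; the IPs and ports are the ones logged and differ on every run:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // tries=1 sends one UDP "hostname" request; the reply lists the
        // responses received, which the test compares to pod names.
        url := "http://10.244.1.139:8080/dial?request=hostname&protocol=udp&host=10.244.2.140&port=8081&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // e.g. {"responses":["netserver-0"]} (illustrative)
    }
------------------------------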
• [SLOW TEST:24.411 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":728,"failed":0} [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:12:40.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-4e786d96-76ce-43cb-99a2-987585cce5b5 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:12:42.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6856" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":728,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:12:42.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-2ccae9db-9beb-4972-8869-3366983c3927 STEP: Creating a pod to test consume secrets Mar 12 19:12:42.606: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1c376925-37a7-4e1a-aaa3-a76b3cef4eca" in namespace "projected-8162" to be "success or failure" Mar 12 19:12:42.623: INFO: Pod "pod-projected-secrets-1c376925-37a7-4e1a-aaa3-a76b3cef4eca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.8737ms Mar 12 19:12:44.636: INFO: Pod "pod-projected-secrets-1c376925-37a7-4e1a-aaa3-a76b3cef4eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.030264882s STEP: Saw pod success Mar 12 19:12:44.636: INFO: Pod "pod-projected-secrets-1c376925-37a7-4e1a-aaa3-a76b3cef4eca" satisfied condition "success or failure" Mar 12 19:12:44.640: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1c376925-37a7-4e1a-aaa3-a76b3cef4eca container projected-secret-volume-test: STEP: delete the pod Mar 12 19:12:44.683: INFO: Waiting for pod pod-projected-secrets-1c376925-37a7-4e1a-aaa3-a76b3cef4eca to disappear Mar 12 19:12:44.689: INFO: Pod pod-projected-secrets-1c376925-37a7-4e1a-aaa3-a76b3cef4eca no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:12:44.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8162" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:12:44.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:12:45.447: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:12:48.483: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:12:48.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1481-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:12:49.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9488" for this suite. STEP: Destroying namespace "webhook-9488-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.069 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":49,"skipped":756,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:12:49.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-fbb87c03-332e-4616-977e-e248ef812c62 in namespace container-probe-9899 Mar 12 19:12:51.867: INFO: Started pod liveness-fbb87c03-332e-4616-977e-e248ef812c62 in namespace container-probe-9899 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 19:12:51.870: INFO: Initial restart count of pod liveness-fbb87c03-332e-4616-977e-e248ef812c62 is 0 Mar 12 19:13:17.944: INFO: Restart count of pod container-probe-9899/liveness-fbb87c03-332e-4616-977e-e248ef812c62 is now 1 (26.073993918s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:13:18.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9899" for this suite. 
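The restart observed above (restartCount going 0 to 1 after roughly 26s) is the kubelet's probe machinery at work, and it can be reproduced with any pod whose /healthz endpoint eventually fails. A minimal sketch, assuming the agnhost image's liveness mode behaves as in the test; the pod name and probe timings are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo                  # illustrative name
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["liveness"]                 # serves /healthz, then starts failing
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF
# restartCount climbs once the probe begins to fail:
kubectl get pod liveness-demo -w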
• [SLOW TEST:28.258 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":765,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:13:18.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating a pod Mar 12 19:13:18.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3813 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 12 19:13:18.182: INFO: stderr: "" Mar 12 19:13:18.182: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 12 19:13:18.182: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 12 19:13:18.182: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3813" to be "running and ready, or succeeded" Mar 12 19:13:18.241: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 59.090184ms Mar 12 19:13:20.244: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.062043056s Mar 12 19:13:20.244: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 12 19:13:20.244: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Mar 12 19:13:20.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3813' Mar 12 19:13:20.350: INFO: stderr: "" Mar 12 19:13:20.350: INFO: stdout: "I0312 19:13:19.360875 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/2cbr 449\nI0312 19:13:19.561098 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/bnlx 383\nI0312 19:13:19.761074 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/8pb 336\nI0312 19:13:19.961072 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/pb9 305\nI0312 19:13:20.160977 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/hp2h 594\n" STEP: limiting log lines Mar 12 19:13:20.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3813 --tail=1' Mar 12 19:13:20.455: INFO: stderr: "" Mar 12 19:13:20.455: INFO: stdout: "I0312 19:13:20.361065 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/7mlh 456\n" Mar 12 19:13:20.455: INFO: got output "I0312 19:13:20.361065 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/7mlh 456\n" STEP: limiting log bytes Mar 12 19:13:20.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3813 --limit-bytes=1' Mar 12 19:13:20.540: INFO: stderr: "" Mar 12 19:13:20.540: INFO: stdout: "I" Mar 12 19:13:20.540: INFO: got output "I" STEP: exposing timestamps Mar 12 19:13:20.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3813 --tail=1 --timestamps' Mar 12 19:13:20.609: INFO: stderr: "" Mar 12 19:13:20.609: INFO: stdout: "2020-03-12T19:13:20.561086475Z I0312 19:13:20.560994 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/wrdt 597\n" Mar 12 19:13:20.609: INFO: got output "2020-03-12T19:13:20.561086475Z I0312 19:13:20.560994 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/wrdt 597\n" STEP: restricting to a time range Mar 12 19:13:23.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3813 --since=1s' Mar 12 19:13:23.217: INFO: stderr: "" Mar 12 19:13:23.217: INFO: stdout: "I0312 19:13:22.361052 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/mn9 496\nI0312 19:13:22.561040 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/vkt 274\nI0312 19:13:22.761046 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/dkkm 456\nI0312 19:13:22.961019 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/9bcq 472\nI0312 19:13:23.161010 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/2hq 412\n" Mar 12 19:13:23.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3813 --since=24h' Mar 12 19:13:23.308: INFO: stderr: "" Mar 12 19:13:23.308: INFO: stdout: "I0312 19:13:19.360875 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/2cbr 449\nI0312 19:13:19.561098 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/bnlx 383\nI0312 19:13:19.761074 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/8pb 336\nI0312 19:13:19.961072 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/pb9 305\nI0312 19:13:20.160977 1 logs_generator.go:76] 4 PUT 
/api/v1/namespaces/ns/pods/hp2h 594\nI0312 19:13:20.361065 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/7mlh 456\nI0312 19:13:20.560994 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/wrdt 597\nI0312 19:13:20.761025 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/2wj9 574\nI0312 19:13:20.961077 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/qvv 251\nI0312 19:13:21.161084 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/g68l 287\nI0312 19:13:21.361055 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/mmb7 545\nI0312 19:13:21.561048 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/kmpx 418\nI0312 19:13:21.761094 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/8m9p 250\nI0312 19:13:21.961096 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/fb84 299\nI0312 19:13:22.161049 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/j6w 267\nI0312 19:13:22.361052 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/mn9 496\nI0312 19:13:22.561040 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/vkt 274\nI0312 19:13:22.761046 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/dkkm 456\nI0312 19:13:22.961019 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/9bcq 472\nI0312 19:13:23.161010 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/2hq 412\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 12 19:13:23.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3813' Mar 12 19:13:36.068: INFO: stderr: "" Mar 12 19:13:36.068: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:13:36.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3813" for this suite. 
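All of the filtering above uses stock kubectl flags, so the sequence can be replayed against any running pod; with the pod and namespace names from the test:

kubectl logs logs-generator -n kubectl-3813                        # full output
kubectl logs logs-generator -n kubectl-3813 --tail=1               # last line only
kubectl logs logs-generator -n kubectl-3813 --limit-bytes=1        # first byte only
kubectl logs logs-generator -n kubectl-3813 --tail=1 --timestamps  # prefix RFC3339 timestamps
kubectl logs logs-generator -n kubectl-3813 --since=1s             # only the last second
kubectl logs logs-generator -n kubectl-3813 --since=24h            # everything from the last day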
• [SLOW TEST:18.077 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":51,"skipped":772,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:13:36.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:13:36.175: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 12 19:13:36.183: INFO: Number of nodes with available pods: 0 Mar 12 19:13:36.183: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 12 19:13:36.247: INFO: Number of nodes with available pods: 0 Mar 12 19:13:36.247: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:13:37.252: INFO: Number of nodes with available pods: 0 Mar 12 19:13:37.252: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:13:38.250: INFO: Number of nodes with available pods: 1 Mar 12 19:13:38.250: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 12 19:13:38.275: INFO: Number of nodes with available pods: 1 Mar 12 19:13:38.275: INFO: Number of running nodes: 0, number of available pods: 1 Mar 12 19:13:39.279: INFO: Number of nodes with available pods: 0 Mar 12 19:13:39.279: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 12 19:13:39.319: INFO: Number of nodes with available pods: 0 Mar 12 19:13:39.319: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:13:40.323: INFO: Number of nodes with available pods: 0 Mar 12 19:13:40.323: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:13:41.337: INFO: Number of nodes with available pods: 0 Mar 12 19:13:41.337: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:13:42.322: INFO: Number of nodes with available pods: 0 Mar 12 19:13:42.322: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:13:43.349: INFO: Number of nodes with available pods: 0 Mar 12 19:13:43.349: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:13:44.322: INFO: Number of nodes with available pods: 1 Mar 12 19:13:44.322: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2783, will wait for the garbage collector to delete the pods Mar 12 19:13:44.383: INFO: Deleting DaemonSet.extensions daemon-set took: 5.347074ms Mar 12 19:13:44.684: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.212549ms Mar 12 19:13:48.307: INFO: Number of nodes with available pods: 0 Mar 12 19:13:48.307: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 19:13:48.311: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2783/daemonsets","resourceVersion":"1198528"},"items":null} Mar 12 19:13:48.314: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2783/pods","resourceVersion":"1198528"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:13:48.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2783" for this suite. 
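What this test exercises: a DaemonSet whose pod template carries a nodeSelector only schedules onto matching nodes, so relabeling a node adds or removes its daemon pod. A hand-run sketch, assuming a DaemonSet named daemon-set whose template selects color=blue (the label key and values are illustrative, echoing the test's blue/green idea):

kubectl label node jerma-worker color=blue                 # node matches; a daemon pod is launched
kubectl label node jerma-worker color=green --overwrite    # node stops matching; the pod is removed
# Point the DaemonSet at the new label, which also rolls the template:
kubectl patch ds daemon-set --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/nodeSelector","value":{"color":"green"}}]'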
• [SLOW TEST:12.252 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":52,"skipped":781,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:13:48.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:13:55.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7210" for this suite. • [SLOW TEST:7.061 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":53,"skipped":782,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:13:55.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 12 19:13:57.483: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 12 19:14:07.601: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:14:07.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7229" for this suite. • [SLOW TEST:12.196 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":54,"skipped":796,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:14:07.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 12 19:14:07.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:14:07.761: INFO: Number of nodes with available pods: 0 Mar 12 19:14:07.761: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:14:08.766: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:14:08.769: INFO: Number of nodes with available pods: 0 Mar 12 19:14:08.769: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:14:09.769: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:14:09.775: INFO: Number of nodes with available pods: 2 Mar 12 19:14:09.775: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 12 19:14:09.804: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:14:09.817: INFO: Number of nodes with available pods: 2 Mar 12 19:14:09.817: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-752, will wait for the garbage collector to delete the pods Mar 12 19:14:10.933: INFO: Deleting DaemonSet.extensions daemon-set took: 11.718833ms Mar 12 19:14:11.234: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.254035ms Mar 12 19:14:26.137: INFO: Number of nodes with available pods: 0 Mar 12 19:14:26.137: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 19:14:26.139: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-752/daemonsets","resourceVersion":"1198752"},"items":null} Mar 12 19:14:26.141: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-752/pods","resourceVersion":"1198752"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:14:26.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-752" for this suite. 
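The revival check above flips a daemon pod's phase to Failed through the API, which has no kubectl equivalent, but the same self-healing is easy to observe by deleting a daemon pod and watching the controller replace it (namespace and pod name are placeholders):

kubectl -n <namespace> delete pod <daemon-set-pod>
kubectl -n <namespace> get pods -o wide -w    # a replacement appears on the same node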
• [SLOW TEST:18.543 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":55,"skipped":809,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:14:26.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-r65p STEP: Creating a pod to test atomic-volume-subpath Mar 12 19:14:26.269: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-r65p" in namespace "subpath-5205" to be "success or failure" Mar 12 19:14:26.272: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Pending", Reason="", readiness=false. Elapsed: 3.549696ms Mar 12 19:14:28.296: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026775093s Mar 12 19:14:30.301: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 4.032547388s Mar 12 19:14:32.326: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 6.05677167s Mar 12 19:14:34.328: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 8.059206644s Mar 12 19:14:36.331: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 10.062152169s Mar 12 19:14:38.334: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 12.064625581s Mar 12 19:14:40.337: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 14.067918322s Mar 12 19:14:42.339: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 16.070065459s Mar 12 19:14:44.347: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 18.077871913s Mar 12 19:14:46.350: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Running", Reason="", readiness=true. Elapsed: 20.08127405s Mar 12 19:14:48.353: INFO: Pod "pod-subpath-test-downwardapi-r65p": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.084371337s STEP: Saw pod success Mar 12 19:14:48.353: INFO: Pod "pod-subpath-test-downwardapi-r65p" satisfied condition "success or failure" Mar 12 19:14:48.356: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-r65p container test-container-subpath-downwardapi-r65p: STEP: delete the pod Mar 12 19:14:48.396: INFO: Waiting for pod pod-subpath-test-downwardapi-r65p to disappear Mar 12 19:14:48.412: INFO: Pod pod-subpath-test-downwardapi-r65p no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-r65p Mar 12 19:14:48.412: INFO: Deleting pod "pod-subpath-test-downwardapi-r65p" in namespace "subpath-5205" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:14:48.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5205" for this suite. • [SLOW TEST:22.267 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":56,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:14:48.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 12 19:14:48.473: INFO: Waiting up to 5m0s for pod "client-containers-8c26c233-c0f4-4ef1-bef6-4529b05c2eb3" in namespace "containers-952" to be "success or failure" Mar 12 19:14:48.476: INFO: Pod "client-containers-8c26c233-c0f4-4ef1-bef6-4529b05c2eb3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.273416ms Mar 12 19:14:50.479: INFO: Pod "client-containers-8c26c233-c0f4-4ef1-bef6-4529b05c2eb3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006151646s STEP: Saw pod success Mar 12 19:14:50.479: INFO: Pod "client-containers-8c26c233-c0f4-4ef1-bef6-4529b05c2eb3" satisfied condition "success or failure" Mar 12 19:14:50.481: INFO: Trying to get logs from node jerma-worker pod client-containers-8c26c233-c0f4-4ef1-bef6-4529b05c2eb3 container test-container: STEP: delete the pod Mar 12 19:14:50.495: INFO: Waiting for pod client-containers-8c26c233-c0f4-4ef1-bef6-4529b05c2eb3 to disappear Mar 12 19:14:50.500: INFO: Pod client-containers-8c26c233-c0f4-4ef1-bef6-4529b05c2eb3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:14:50.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-952" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":849,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:14:50.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 12 19:14:51.219: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 12 19:14:53.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637291, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637291, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637291, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719637291, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:14:56.258: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:14:56.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:14:57.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8323" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.998 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":58,"skipped":885,"failed":0} SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:14:57.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 12 19:15:02.106: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3990 pod-service-account-36a0bbd6-3960-4acc-bbe1-0061f986854d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 12 19:15:04.263: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3990 pod-service-account-36a0bbd6-3960-4acc-bbe1-0061f986854d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 12 19:15:04.424: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3990 pod-service-account-36a0bbd6-3960-4acc-bbe1-0061f986854d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:15:04.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3990" for this suite. 
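The three reads above target the standard service-account mount, present in any pod with a mounted token; with a placeholder pod name:

kubectl exec -n svcaccounts-3990 <pod> -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec -n svcaccounts-3990 <pod> -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec -n svcaccounts-3990 <pod> -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace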
• [SLOW TEST:7.120 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":59,"skipped":890,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:15:04.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:15:04.669: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:15:05.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7953" for this suite. 
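A minimal create/delete round-trip of the kind exercised above, using an illustrative group and kind (the apiextensions.k8s.io/v1 API requires a schema; preserve-unknown-fields keeps the sketch short):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.stable.example.com       # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl delete crd demos.stable.example.com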
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":60,"skipped":892,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:15:05.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:15:31.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2734" for this suite. 
• [SLOW TEST:25.310 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":934,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:15:31.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 12 19:15:31.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5159' Mar 12 19:15:31.337: INFO: stderr: "" Mar 12 19:15:31.337: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 19:15:31.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5159' Mar 12 19:15:31.440: INFO: stderr: "" Mar 12 19:15:31.440: INFO: stdout: "update-demo-nautilus-8kjd7 update-demo-nautilus-w9jl9 " Mar 12 19:15:31.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kjd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:31.502: INFO: stderr: "" Mar 12 19:15:31.502: INFO: stdout: "" Mar 12 19:15:31.502: INFO: update-demo-nautilus-8kjd7 is created but not running Mar 12 19:15:36.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5159' Mar 12 19:15:36.578: INFO: stderr: "" Mar 12 19:15:36.578: INFO: stdout: "update-demo-nautilus-8kjd7 update-demo-nautilus-w9jl9 " Mar 12 19:15:36.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kjd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:36.653: INFO: stderr: "" Mar 12 19:15:36.653: INFO: stdout: "true" Mar 12 19:15:36.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kjd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:36.726: INFO: stderr: "" Mar 12 19:15:36.726: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:15:36.726: INFO: validating pod update-demo-nautilus-8kjd7 Mar 12 19:15:36.729: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:15:36.729: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:15:36.729: INFO: update-demo-nautilus-8kjd7 is verified up and running Mar 12 19:15:36.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9jl9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:36.811: INFO: stderr: "" Mar 12 19:15:36.811: INFO: stdout: "true" Mar 12 19:15:36.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9jl9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:36.879: INFO: stderr: "" Mar 12 19:15:36.879: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:15:36.879: INFO: validating pod update-demo-nautilus-w9jl9 Mar 12 19:15:36.882: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:15:36.882: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 12 19:15:36.882: INFO: update-demo-nautilus-w9jl9 is verified up and running STEP: rolling-update to new replication controller Mar 12 19:15:36.883: INFO: scanned /root for discovery docs: Mar 12 19:15:36.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5159' Mar 12 19:15:59.369: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 12 19:15:59.369: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 19:15:59.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5159' Mar 12 19:15:59.462: INFO: stderr: "" Mar 12 19:15:59.462: INFO: stdout: "update-demo-kitten-5cbhm update-demo-kitten-jzrxt " Mar 12 19:15:59.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5cbhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:59.529: INFO: stderr: "" Mar 12 19:15:59.529: INFO: stdout: "true" Mar 12 19:15:59.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5cbhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:59.590: INFO: stderr: "" Mar 12 19:15:59.590: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 12 19:15:59.590: INFO: validating pod update-demo-kitten-5cbhm Mar 12 19:15:59.593: INFO: got data: { "image": "kitten.jpg" } Mar 12 19:15:59.593: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 12 19:15:59.593: INFO: update-demo-kitten-5cbhm is verified up and running Mar 12 19:15:59.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jzrxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:59.656: INFO: stderr: "" Mar 12 19:15:59.656: INFO: stdout: "true" Mar 12 19:15:59.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jzrxt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5159' Mar 12 19:15:59.725: INFO: stderr: "" Mar 12 19:15:59.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 12 19:15:59.725: INFO: validating pod update-demo-kitten-jzrxt Mar 12 19:15:59.728: INFO: got data: { "image": "kitten.jpg" } Mar 12 19:15:59.728: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 12 19:15:59.728: INFO: update-demo-kitten-jzrxt is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:15:59.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5159" for this suite. • [SLOW TEST:28.706 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":62,"skipped":938,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:15:59.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:16:00.498: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:16:03.548: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook 
policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:16:13.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3934" for this suite. STEP: Destroying namespace "webhook-3934-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.045 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":63,"skipped":957,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:16:13.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 12 19:16:13.821: INFO: Waiting up to 5m0s for pod "pod-d801975a-c0be-4761-82e5-bd0114fa2df3" in namespace "emptydir-6043" to be "success or failure" Mar 12 19:16:13.825: INFO: Pod "pod-d801975a-c0be-4761-82e5-bd0114fa2df3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.911602ms Mar 12 19:16:15.829: INFO: Pod "pod-d801975a-c0be-4761-82e5-bd0114fa2df3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007590876s STEP: Saw pod success Mar 12 19:16:15.829: INFO: Pod "pod-d801975a-c0be-4761-82e5-bd0114fa2df3" satisfied condition "success or failure" Mar 12 19:16:15.832: INFO: Trying to get logs from node jerma-worker2 pod pod-d801975a-c0be-4761-82e5-bd0114fa2df3 container test-container: STEP: delete the pod Mar 12 19:16:15.897: INFO: Waiting for pod pod-d801975a-c0be-4761-82e5-bd0114fa2df3 to disappear Mar 12 19:16:15.904: INFO: Pod pod-d801975a-c0be-4761-82e5-bd0114fa2df3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:16:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6043" for this suite. 
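The variant above writes a 0644 file into a tmpfs-backed emptyDir as a non-root user; a minimal sketch with an illustrative name and UID:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo            # illustrative
spec:
  securityContext:
    runAsUser: 1001                    # non-root, as in the test variant
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -ln /mnt && mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                   # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo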
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":961,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:16:15.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 19:16:15.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-447' Mar 12 19:16:16.075: INFO: stderr: "" Mar 12 19:16:16.075: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 12 19:16:16.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-447' Mar 12 19:16:26.003: INFO: stderr: "" Mar 12 19:16:26.003: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:16:26.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-447" for this suite. 
• [SLOW TEST:10.098 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":65,"skipped":977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:16:26.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 19:16:26.080: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:16:30.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5313" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":66,"skipped":1003,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:16:30.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:16:41.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2049" for this suite. • [SLOW TEST:11.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":67,"skipped":1015,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:16:41.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:16:41.626: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1db16134-8e0a-4680-8f03-a25045f2e3da", Controller:(*bool)(0xc00541e292), BlockOwnerDeletion:(*bool)(0xc00541e293)}} Mar 12 19:16:41.635: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"03205e75-ee53-43f4-b139-0021967329ff", Controller:(*bool)(0xc00351521a), BlockOwnerDeletion:(*bool)(0xc00351521b)}} Mar 12 19:16:41.711: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9c252181-7472-45a0-8614-b4de6b38622e", Controller:(*bool)(0xc0034ead1a), BlockOwnerDeletion:(*bool)(0xc0034ead1b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:16:46.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4971" for this suite. 
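Reference note: the pod1/pod2/pod3 circle above is built purely from metadata.ownerReferences. A hedged sketch of forging one such link by hand, assuming pods named pod1 and pod3 already exist (the uid must be the owner's real UID, so it is fetched first):

OWNER_UID=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge -p "{
  \"metadata\": {\"ownerReferences\": [{
    \"apiVersion\": \"v1\", \"kind\": \"Pod\", \"name\": \"pod3\",
    \"uid\": \"$OWNER_UID\",
    \"controller\": true, \"blockOwnerDeletion\": true}]}}"

The test's assertion is that the garbage collector tolerates the resulting cycle instead of deadlocking on blockOwnerDeletion.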
• [SLOW TEST:5.241 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":68,"skipped":1032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:16:46.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 12 19:16:50.859: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 19:16:50.879: INFO: Pod pod-with-prestop-http-hook still exists Mar 12 19:16:52.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 19:16:52.882: INFO: Pod pod-with-prestop-http-hook still exists Mar 12 19:16:54.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 19:16:54.882: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:16:54.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3209" for this suite. 
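Reference note: the pod under test wires a preStop HTTP hook; on deletion the kubelet fires the GET before the container is stopped, and the suite then asks the handler pod whether the request arrived. A minimal sketch of the shape of such a pod (image, name, and the handler address are illustrative; in the e2e test the host field points at the separately created hook-handler pod):

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /shutdown
          port: 8080
          host: 10.0.0.10   # illustrative handler IP, not from this run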
• [SLOW TEST:8.164 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1061,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:16:54.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 12 19:16:54.972: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 12 19:17:02.031: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:17:02.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-388" for this suite. 
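Reference note: the same submit/watch/delete cycle can be driven by hand; the watch below surfaces the ADDED and DELETED events the test is verifying (pod name illustrative):

kubectl get pods -w &                                # stream pod events in the background
kubectl run watch-demo --restart=Never --image=nginx
kubectl delete pod watch-demo --grace-period=30      # graceful delete, as in the test
kill %1                                              # stop the watch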
• [SLOW TEST:7.148 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:17:02.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0312 19:17:32.645559 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 19:17:32.645: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:17:32.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7192" for this suite. 
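Reference note: the orphaning behavior checked above corresponds to deleteOptions.propagationPolicy=Orphan. From the CLI this is (deployment name illustrative; the 1.17-era client used in this run spelled the flag --cascade=false, kubectl >= 1.20 uses --cascade=orphan):

kubectl create deployment demo --image=nginx
kubectl delete deployment demo --cascade=orphan
kubectl get rs    # the ReplicaSet survives, its ownerReference to the Deployment removed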
• [SLOW TEST:30.607 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":71,"skipped":1108,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:17:32.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:17:32.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3295" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":72,"skipped":1124,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:17:32.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:17:49.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8769" for this suite. 
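Reference note: the scoping exercised above comes from spec.scopes on the ResourceQuota; a BestEffort-scoped quota only counts pods that declare no resource requests or limits. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort
spec:
  hard:
    pods: "5"
  scopes: ["BestEffort"]   # a NotBestEffort-scoped quota ignores the same pods, as the test verifies
EOF
kubectl describe quota quota-besteffort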
• [SLOW TEST:16.305 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":73,"skipped":1133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:17:49.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-8251f8c3-98be-48e8-a721-a6f0966c7bb2 STEP: Creating a pod to test consume configMaps Mar 12 19:17:49.150: INFO: Waiting up to 5m0s for pod "pod-configmaps-74a4a268-a251-4eb5-9d67-acae95f6e106" in namespace "configmap-5906" to be "success or failure" Mar 12 19:17:49.179: INFO: Pod "pod-configmaps-74a4a268-a251-4eb5-9d67-acae95f6e106": Phase="Pending", Reason="", readiness=false. Elapsed: 29.7329ms Mar 12 19:17:51.184: INFO: Pod "pod-configmaps-74a4a268-a251-4eb5-9d67-acae95f6e106": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034205102s STEP: Saw pod success Mar 12 19:17:51.184: INFO: Pod "pod-configmaps-74a4a268-a251-4eb5-9d67-acae95f6e106" satisfied condition "success or failure" Mar 12 19:17:51.187: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-74a4a268-a251-4eb5-9d67-acae95f6e106 container configmap-volume-test: STEP: delete the pod Mar 12 19:17:51.239: INFO: Waiting for pod pod-configmaps-74a4a268-a251-4eb5-9d67-acae95f6e106 to disappear Mar 12 19:17:51.244: INFO: Pod pod-configmaps-74a4a268-a251-4eb5-9d67-acae95f6e106 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:17:51.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5906" for this suite. 
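Reference note: configMap volume files default to mode 0644, which is what lets the non-root container above read them. A stand-alone sketch (configmap name, key, image, and uid are illustrative):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # run the whole pod as non-root
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config
EOF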
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1164,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:17:51.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:17:51.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5954" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1165,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:17:51.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2963 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 12 19:17:51.495: INFO: Found 0 stateful pods, waiting for 3 Mar 12 19:18:01.501: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:18:01.501: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:18:01.501: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:18:01.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2963 ss2-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 19:18:01.754: INFO: stderr: "I0312 19:18:01.630183 1340 log.go:172] (0xc0009af3f0) (0xc00097e640) Create stream\nI0312 19:18:01.630232 1340 log.go:172] (0xc0009af3f0) (0xc00097e640) Stream added, broadcasting: 1\nI0312 19:18:01.633580 1340 log.go:172] (0xc0009af3f0) Reply frame received for 1\nI0312 19:18:01.633615 1340 log.go:172] (0xc0009af3f0) (0xc00072e640) Create stream\nI0312 19:18:01.633626 1340 log.go:172] (0xc0009af3f0) (0xc00072e640) Stream added, broadcasting: 3\nI0312 19:18:01.634368 1340 log.go:172] (0xc0009af3f0) Reply frame received for 3\nI0312 19:18:01.634398 1340 log.go:172] (0xc0009af3f0) (0xc0005f7400) Create stream\nI0312 19:18:01.634411 1340 log.go:172] (0xc0009af3f0) (0xc0005f7400) Stream added, broadcasting: 5\nI0312 19:18:01.635227 1340 log.go:172] (0xc0009af3f0) Reply frame received for 5\nI0312 19:18:01.708789 1340 log.go:172] (0xc0009af3f0) Data frame received for 5\nI0312 19:18:01.708811 1340 log.go:172] (0xc0005f7400) (5) Data frame handling\nI0312 19:18:01.708826 1340 log.go:172] (0xc0005f7400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 19:18:01.748689 1340 log.go:172] (0xc0009af3f0) Data frame received for 3\nI0312 19:18:01.748720 1340 log.go:172] (0xc00072e640) (3) Data frame handling\nI0312 19:18:01.748737 1340 log.go:172] (0xc00072e640) (3) Data frame sent\nI0312 19:18:01.748747 1340 log.go:172] (0xc0009af3f0) Data frame received for 3\nI0312 19:18:01.748754 1340 log.go:172] (0xc00072e640) (3) Data frame handling\nI0312 19:18:01.748824 1340 log.go:172] (0xc0009af3f0) Data frame received for 5\nI0312 19:18:01.748839 1340 log.go:172] (0xc0005f7400) (5) Data frame handling\nI0312 19:18:01.750236 1340 log.go:172] (0xc0009af3f0) Data frame received for 1\nI0312 19:18:01.750257 1340 log.go:172] (0xc00097e640) (1) Data frame handling\nI0312 19:18:01.750267 1340 log.go:172] (0xc00097e640) (1) Data frame sent\nI0312 19:18:01.750281 1340 log.go:172] (0xc0009af3f0) (0xc00097e640) Stream removed, broadcasting: 1\nI0312 19:18:01.750307 1340 log.go:172] (0xc0009af3f0) Go away received\nI0312 19:18:01.750625 1340 log.go:172] (0xc0009af3f0) (0xc00097e640) Stream removed, broadcasting: 1\nI0312 19:18:01.750638 1340 log.go:172] (0xc0009af3f0) (0xc00072e640) Stream removed, broadcasting: 3\nI0312 19:18:01.750644 1340 log.go:172] (0xc0009af3f0) (0xc0005f7400) Stream removed, broadcasting: 5\n" Mar 12 19:18:01.754: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 19:18:01.754: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 12 19:18:11.788: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 12 19:18:21.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2963 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 19:18:22.060: INFO: stderr: "I0312 19:18:21.992509 1360 log.go:172] (0xc0009ceb00) (0xc00092e140) Create stream\nI0312 19:18:21.992543 1360 log.go:172] (0xc0009ceb00) (0xc00092e140) Stream added, broadcasting: 1\nI0312 19:18:21.994297 1360 log.go:172] (0xc0009ceb00) Reply frame received for 1\nI0312 19:18:21.994333 1360 log.go:172] (0xc0009ceb00) (0xc00020d540) Create 
stream\nI0312 19:18:21.994350 1360 log.go:172] (0xc0009ceb00) (0xc00020d540) Stream added, broadcasting: 3\nI0312 19:18:21.995212 1360 log.go:172] (0xc0009ceb00) Reply frame received for 3\nI0312 19:18:21.995242 1360 log.go:172] (0xc0009ceb00) (0xc000683b80) Create stream\nI0312 19:18:21.995256 1360 log.go:172] (0xc0009ceb00) (0xc000683b80) Stream added, broadcasting: 5\nI0312 19:18:21.996024 1360 log.go:172] (0xc0009ceb00) Reply frame received for 5\nI0312 19:18:22.052798 1360 log.go:172] (0xc0009ceb00) Data frame received for 3\nI0312 19:18:22.052828 1360 log.go:172] (0xc00020d540) (3) Data frame handling\nI0312 19:18:22.052848 1360 log.go:172] (0xc00020d540) (3) Data frame sent\nI0312 19:18:22.052907 1360 log.go:172] (0xc0009ceb00) Data frame received for 3\nI0312 19:18:22.052930 1360 log.go:172] (0xc00020d540) (3) Data frame handling\nI0312 19:18:22.053468 1360 log.go:172] (0xc0009ceb00) Data frame received for 5\nI0312 19:18:22.053482 1360 log.go:172] (0xc000683b80) (5) Data frame handling\nI0312 19:18:22.053490 1360 log.go:172] (0xc000683b80) (5) Data frame sent\nI0312 19:18:22.053497 1360 log.go:172] (0xc0009ceb00) Data frame received for 5\nI0312 19:18:22.053502 1360 log.go:172] (0xc000683b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 19:18:22.057221 1360 log.go:172] (0xc0009ceb00) Data frame received for 1\nI0312 19:18:22.057233 1360 log.go:172] (0xc00092e140) (1) Data frame handling\nI0312 19:18:22.057239 1360 log.go:172] (0xc00092e140) (1) Data frame sent\nI0312 19:18:22.057247 1360 log.go:172] (0xc0009ceb00) (0xc00092e140) Stream removed, broadcasting: 1\nI0312 19:18:22.057255 1360 log.go:172] (0xc0009ceb00) Go away received\nI0312 19:18:22.057511 1360 log.go:172] (0xc0009ceb00) (0xc00092e140) Stream removed, broadcasting: 1\nI0312 19:18:22.057529 1360 log.go:172] (0xc0009ceb00) (0xc00020d540) Stream removed, broadcasting: 3\nI0312 19:18:22.057538 1360 log.go:172] (0xc0009ceb00) (0xc000683b80) Stream removed, broadcasting: 5\n" Mar 12 19:18:22.060: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 19:18:22.060: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Mar 12 19:18:42.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2963 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 19:18:42.297: INFO: stderr: "I0312 19:18:42.219394 1380 log.go:172] (0xc000c074a0) (0xc000900780) Create stream\nI0312 19:18:42.219507 1380 log.go:172] (0xc000c074a0) (0xc000900780) Stream added, broadcasting: 1\nI0312 19:18:42.222418 1380 log.go:172] (0xc000c074a0) Reply frame received for 1\nI0312 19:18:42.222446 1380 log.go:172] (0xc000c074a0) (0xc00067a6e0) Create stream\nI0312 19:18:42.222458 1380 log.go:172] (0xc000c074a0) (0xc00067a6e0) Stream added, broadcasting: 3\nI0312 19:18:42.223001 1380 log.go:172] (0xc000c074a0) Reply frame received for 3\nI0312 19:18:42.223022 1380 log.go:172] (0xc000c074a0) (0xc0004e94a0) Create stream\nI0312 19:18:42.223033 1380 log.go:172] (0xc000c074a0) (0xc0004e94a0) Stream added, broadcasting: 5\nI0312 19:18:42.223578 1380 log.go:172] (0xc000c074a0) Reply frame received for 5\nI0312 19:18:42.275510 1380 log.go:172] (0xc000c074a0) Data frame received for 5\nI0312 19:18:42.275532 1380 log.go:172] (0xc0004e94a0) (5) Data frame handling\nI0312 19:18:42.275547 1380 
log.go:172] (0xc0004e94a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 19:18:42.293324 1380 log.go:172] (0xc000c074a0) Data frame received for 5\nI0312 19:18:42.293354 1380 log.go:172] (0xc0004e94a0) (5) Data frame handling\nI0312 19:18:42.293376 1380 log.go:172] (0xc000c074a0) Data frame received for 3\nI0312 19:18:42.293387 1380 log.go:172] (0xc00067a6e0) (3) Data frame handling\nI0312 19:18:42.293398 1380 log.go:172] (0xc00067a6e0) (3) Data frame sent\nI0312 19:18:42.293409 1380 log.go:172] (0xc000c074a0) Data frame received for 3\nI0312 19:18:42.293420 1380 log.go:172] (0xc00067a6e0) (3) Data frame handling\nI0312 19:18:42.294358 1380 log.go:172] (0xc000c074a0) Data frame received for 1\nI0312 19:18:42.294396 1380 log.go:172] (0xc000900780) (1) Data frame handling\nI0312 19:18:42.294439 1380 log.go:172] (0xc000900780) (1) Data frame sent\nI0312 19:18:42.294485 1380 log.go:172] (0xc000c074a0) (0xc000900780) Stream removed, broadcasting: 1\nI0312 19:18:42.294506 1380 log.go:172] (0xc000c074a0) Go away received\nI0312 19:18:42.294678 1380 log.go:172] (0xc000c074a0) (0xc000900780) Stream removed, broadcasting: 1\nI0312 19:18:42.294696 1380 log.go:172] (0xc000c074a0) (0xc00067a6e0) Stream removed, broadcasting: 3\nI0312 19:18:42.294702 1380 log.go:172] (0xc000c074a0) (0xc0004e94a0) Stream removed, broadcasting: 5\n" Mar 12 19:18:42.297: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 19:18:42.297: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 19:18:52.320: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 12 19:19:02.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2963 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 19:19:02.582: INFO: stderr: "I0312 19:19:02.507529 1400 log.go:172] (0xc0004afce0) (0xc0008ce780) Create stream\nI0312 19:19:02.507716 1400 log.go:172] (0xc0004afce0) (0xc0008ce780) Stream added, broadcasting: 1\nI0312 19:19:02.511407 1400 log.go:172] (0xc0004afce0) Reply frame received for 1\nI0312 19:19:02.511446 1400 log.go:172] (0xc0004afce0) (0xc00064c780) Create stream\nI0312 19:19:02.511463 1400 log.go:172] (0xc0004afce0) (0xc00064c780) Stream added, broadcasting: 3\nI0312 19:19:02.512186 1400 log.go:172] (0xc0004afce0) Reply frame received for 3\nI0312 19:19:02.512216 1400 log.go:172] (0xc0004afce0) (0xc00041f540) Create stream\nI0312 19:19:02.512228 1400 log.go:172] (0xc0004afce0) (0xc00041f540) Stream added, broadcasting: 5\nI0312 19:19:02.512990 1400 log.go:172] (0xc0004afce0) Reply frame received for 5\nI0312 19:19:02.574377 1400 log.go:172] (0xc0004afce0) Data frame received for 3\nI0312 19:19:02.574408 1400 log.go:172] (0xc00064c780) (3) Data frame handling\nI0312 19:19:02.574421 1400 log.go:172] (0xc00064c780) (3) Data frame sent\nI0312 19:19:02.574458 1400 log.go:172] (0xc0004afce0) Data frame received for 3\nI0312 19:19:02.574466 1400 log.go:172] (0xc00064c780) (3) Data frame handling\nI0312 19:19:02.574485 1400 log.go:172] (0xc0004afce0) Data frame received for 5\nI0312 19:19:02.574493 1400 log.go:172] (0xc00041f540) (5) Data frame handling\nI0312 19:19:02.574502 1400 log.go:172] (0xc00041f540) (5) Data frame sent\nI0312 19:19:02.574510 1400 log.go:172] (0xc0004afce0) Data frame received for 5\nI0312 19:19:02.574517 1400 log.go:172] (0xc00041f540) (5) Data 
frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 19:19:02.578325 1400 log.go:172] (0xc0004afce0) Data frame received for 1\nI0312 19:19:02.578368 1400 log.go:172] (0xc0008ce780) (1) Data frame handling\nI0312 19:19:02.578397 1400 log.go:172] (0xc0008ce780) (1) Data frame sent\nI0312 19:19:02.578450 1400 log.go:172] (0xc0004afce0) (0xc0008ce780) Stream removed, broadcasting: 1\nI0312 19:19:02.578768 1400 log.go:172] (0xc0004afce0) (0xc0008ce780) Stream removed, broadcasting: 1\nI0312 19:19:02.578784 1400 log.go:172] (0xc0004afce0) (0xc00064c780) Stream removed, broadcasting: 3\nI0312 19:19:02.578913 1400 log.go:172] (0xc0004afce0) (0xc00041f540) Stream removed, broadcasting: 5\n" Mar 12 19:19:02.583: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 19:19:02.583: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 19:19:12.626: INFO: Waiting for StatefulSet statefulset-2963/ss2 to complete update Mar 12 19:19:12.626: INFO: Waiting for Pod statefulset-2963/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 19:19:22.635: INFO: Deleting all statefulset in ns statefulset-2963 Mar 12 19:19:22.637: INFO: Scaling statefulset ss2 to 0 Mar 12 19:19:32.655: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 19:19:32.657: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:19:32.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2963" for this suite. 
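Reference note: the update/rollback cycle above maps onto ordinary rollout commands; the ss2-65c7964b94/ss2-84f9d6bf57 strings are ControllerRevision hashes the controller stamps onto each pod. A hedged sketch of the equivalent CLI flow (StatefulSet name ss2 as in the run; the container name webserver is an assumption, not confirmed by the log):

kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
kubectl rollout status statefulset/ss2
kubectl rollout undo statefulset/ss2   # roll back to the previous revision
kubectl get controllerrevisions        # lists the revisions the test matches pods against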
• [SLOW TEST:101.297 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":76,"skipped":1170,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:19:32.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 19:19:34.780: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:19:34.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7084" for this suite. 
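Reference note: the termination-message check above hinges on terminationMessagePath plus a non-root securityContext. A minimal sketch of an equivalent pod and how to read the message back (names, image, path, and uid are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # write the message to the non-default path the kubelet was told to collect
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
    securityContext:
      runAsUser: 1000
EOF
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expect: DONE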
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1181,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:19:34.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 12 19:19:34.906: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:34.929: INFO: Number of nodes with available pods: 0 Mar 12 19:19:34.929: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:19:35.957: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:35.960: INFO: Number of nodes with available pods: 0 Mar 12 19:19:35.960: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:19:36.939: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:36.942: INFO: Number of nodes with available pods: 2 Mar 12 19:19:36.942: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 12 19:19:36.974: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:36.977: INFO: Number of nodes with available pods: 1 Mar 12 19:19:36.977: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 19:19:37.995: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:38.059: INFO: Number of nodes with available pods: 1 Mar 12 19:19:38.059: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 19:19:38.981: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:38.985: INFO: Number of nodes with available pods: 1 Mar 12 19:19:38.985: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 19:19:39.981: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:39.983: INFO: Number of nodes with available pods: 1 Mar 12 19:19:39.983: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 19:19:40.981: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:40.984: INFO: Number of nodes with available pods: 1 Mar 12 19:19:40.984: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 19:19:41.982: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:41.985: INFO: Number of nodes with available pods: 1 Mar 12 19:19:41.985: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 19:19:42.980: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:19:42.982: INFO: Number of nodes with available pods: 2 Mar 12 19:19:42.982: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1624, will wait for the garbage collector to delete the pods Mar 12 19:19:43.038: INFO: Deleting DaemonSet.extensions daemon-set took: 3.63482ms Mar 12 19:19:43.138: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.189797ms Mar 12 19:19:46.742: INFO: Number of nodes with available pods: 0 Mar 12 19:19:46.742: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 19:19:46.745: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1624/daemonsets","resourceVersion":"1200965"},"items":null} Mar 12 19:19:46.747: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1624/pods","resourceVersion":"1200965"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Mar 12 19:19:46.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1624" for this suite. • [SLOW TEST:11.960 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":78,"skipped":1184,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:19:46.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:19:46.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd40dd28-7f24-4bdf-b13c-18e5beeb9d34" in namespace "projected-803" to be "success or failure" Mar 12 19:19:46.862: INFO: Pod "downwardapi-volume-fd40dd28-7f24-4bdf-b13c-18e5beeb9d34": Phase="Pending", Reason="", readiness=false. Elapsed: 29.270787ms Mar 12 19:19:48.866: INFO: Pod "downwardapi-volume-fd40dd28-7f24-4bdf-b13c-18e5beeb9d34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032805989s STEP: Saw pod success Mar 12 19:19:48.866: INFO: Pod "downwardapi-volume-fd40dd28-7f24-4bdf-b13c-18e5beeb9d34" satisfied condition "success or failure" Mar 12 19:19:48.869: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fd40dd28-7f24-4bdf-b13c-18e5beeb9d34 container client-container: STEP: delete the pod Mar 12 19:19:48.913: INFO: Waiting for pod downwardapi-volume-fd40dd28-7f24-4bdf-b13c-18e5beeb9d34 to disappear Mar 12 19:19:48.919: INFO: Pod downwardapi-volume-fd40dd28-7f24-4bdf-b13c-18e5beeb9d34 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:19:48.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-803" for this suite. 
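Reference note: the assertion above is that when a container declares no cpu limit, a downwardAPI file for limits.cpu reports the node's allocatable cpu instead. A minimal sketch of the projected volume involved (names and image illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: podinfo-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # no cpu limit set on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: main
              resource: limits.cpu
EOF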
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:19:48.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 19:19:51.502: INFO: Successfully updated pod "annotationupdate3e6d55b8-62f9-4e71-ac06-c2a81580f178" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:19:53.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3812" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1219,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:19:53.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 12 19:19:53.610: INFO: Waiting up to 5m0s for pod "pod-54719c93-1a43-493d-bcf4-f63b18668bc9" in namespace "emptydir-469" to be "success or failure" Mar 12 19:19:53.632: INFO: Pod "pod-54719c93-1a43-493d-bcf4-f63b18668bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.744723ms Mar 12 19:19:55.635: INFO: Pod "pod-54719c93-1a43-493d-bcf4-f63b18668bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024734001s Mar 12 19:19:57.638: INFO: Pod "pod-54719c93-1a43-493d-bcf4-f63b18668bc9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028039926s STEP: Saw pod success Mar 12 19:19:57.638: INFO: Pod "pod-54719c93-1a43-493d-bcf4-f63b18668bc9" satisfied condition "success or failure" Mar 12 19:19:57.640: INFO: Trying to get logs from node jerma-worker2 pod pod-54719c93-1a43-493d-bcf4-f63b18668bc9 container test-container: STEP: delete the pod Mar 12 19:19:57.690: INFO: Waiting for pod pod-54719c93-1a43-493d-bcf4-f63b18668bc9 to disappear Mar 12 19:19:57.705: INFO: Pod pod-54719c93-1a43-493d-bcf4-f63b18668bc9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:19:57.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-469" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1237,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:19:57.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:19:58.248: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:20:01.281: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:20:01.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2284-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:20:02.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8252" for this suite. STEP: Destroying namespace "webhook-8252-markers" for this suite. 
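Reference note: the registration step above creates a MutatingWebhookConfiguration aimed at the e2e-test CRD; "pruning" refers to the CRD's structural schema dropping any unknown fields from what the webhook patches in. A heavily hedged sketch of the general shape only; the group, resource, service coordinates, and webhook name below are placeholders, and caBundle is omitted (the suite injects its self-signed CA there):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutator
webhooks:
- name: demo.webhook.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["stable.example.com"]   # placeholder CRD group
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["crontabs"]             # placeholder resource
  clientConfig:
    service:
      namespace: default
      name: demo-webhook
      path: /mutating
    # caBundle: <the CA that signed the webhook server's cert>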
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":82,"skipped":1243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:20:02.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-43 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-43 I0312 19:20:02.682219 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-43, replica count: 2 I0312 19:20:05.732543 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 19:20:05.732: INFO: Creating new exec pod Mar 12 19:20:08.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-43 execpodpdtql -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 12 19:20:08.953: INFO: stderr: "I0312 19:20:08.880730 1421 log.go:172] (0xc000b500b0) (0xc0007774a0) Create stream\nI0312 19:20:08.880778 1421 log.go:172] (0xc000b500b0) (0xc0007774a0) Stream added, broadcasting: 1\nI0312 19:20:08.882983 1421 log.go:172] (0xc000b500b0) Reply frame received for 1\nI0312 19:20:08.883018 1421 log.go:172] (0xc000b500b0) (0xc00066ba40) Create stream\nI0312 19:20:08.883028 1421 log.go:172] (0xc000b500b0) (0xc00066ba40) Stream added, broadcasting: 3\nI0312 19:20:08.883962 1421 log.go:172] (0xc000b500b0) Reply frame received for 3\nI0312 19:20:08.883990 1421 log.go:172] (0xc000b500b0) (0xc0009d2000) Create stream\nI0312 19:20:08.883998 1421 log.go:172] (0xc000b500b0) (0xc0009d2000) Stream added, broadcasting: 5\nI0312 19:20:08.884890 1421 log.go:172] (0xc000b500b0) Reply frame received for 5\nI0312 19:20:08.945869 1421 log.go:172] (0xc000b500b0) Data frame received for 5\nI0312 19:20:08.945898 1421 log.go:172] (0xc0009d2000) (5) Data frame handling\nI0312 19:20:08.945919 1421 log.go:172] (0xc0009d2000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0312 19:20:08.946784 1421 log.go:172] (0xc000b500b0) Data frame received for 3\nI0312 19:20:08.946817 1421 log.go:172] (0xc00066ba40) (3) Data frame handling\nI0312 19:20:08.946843 1421 log.go:172] (0xc000b500b0) Data frame received for 5\nI0312 19:20:08.946856 1421 log.go:172] (0xc0009d2000) (5) Data frame handling\nI0312 19:20:08.946865 1421 
log.go:172] (0xc0009d2000) (5) Data frame sent\nI0312 19:20:08.946876 1421 log.go:172] (0xc000b500b0) Data frame received for 5\nI0312 19:20:08.946883 1421 log.go:172] (0xc0009d2000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0312 19:20:08.948642 1421 log.go:172] (0xc000b500b0) Data frame received for 1\nI0312 19:20:08.948671 1421 log.go:172] (0xc0007774a0) (1) Data frame handling\nI0312 19:20:08.948691 1421 log.go:172] (0xc0007774a0) (1) Data frame sent\nI0312 19:20:08.948726 1421 log.go:172] (0xc000b500b0) (0xc0007774a0) Stream removed, broadcasting: 1\nI0312 19:20:08.948751 1421 log.go:172] (0xc000b500b0) Go away received\nI0312 19:20:08.949211 1421 log.go:172] (0xc000b500b0) (0xc0007774a0) Stream removed, broadcasting: 1\nI0312 19:20:08.949231 1421 log.go:172] (0xc000b500b0) (0xc00066ba40) Stream removed, broadcasting: 3\nI0312 19:20:08.949239 1421 log.go:172] (0xc000b500b0) (0xc0009d2000) Stream removed, broadcasting: 5\n" Mar 12 19:20:08.953: INFO: stdout: "" Mar 12 19:20:08.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-43 execpodpdtql -- /bin/sh -x -c nc -zv -t -w 2 10.105.173.121 80' Mar 12 19:20:09.125: INFO: stderr: "I0312 19:20:09.056975 1441 log.go:172] (0xc000b9ef20) (0xc000b946e0) Create stream\nI0312 19:20:09.057013 1441 log.go:172] (0xc000b9ef20) (0xc000b946e0) Stream added, broadcasting: 1\nI0312 19:20:09.060357 1441 log.go:172] (0xc000b9ef20) Reply frame received for 1\nI0312 19:20:09.060396 1441 log.go:172] (0xc000b9ef20) (0xc0006286e0) Create stream\nI0312 19:20:09.060407 1441 log.go:172] (0xc000b9ef20) (0xc0006286e0) Stream added, broadcasting: 3\nI0312 19:20:09.061345 1441 log.go:172] (0xc000b9ef20) Reply frame received for 3\nI0312 19:20:09.061375 1441 log.go:172] (0xc000b9ef20) (0xc0003c74a0) Create stream\nI0312 19:20:09.061387 1441 log.go:172] (0xc000b9ef20) (0xc0003c74a0) Stream added, broadcasting: 5\nI0312 19:20:09.062353 1441 log.go:172] (0xc000b9ef20) Reply frame received for 5\nI0312 19:20:09.119865 1441 log.go:172] (0xc000b9ef20) Data frame received for 3\nI0312 19:20:09.119886 1441 log.go:172] (0xc0006286e0) (3) Data frame handling\nI0312 19:20:09.119912 1441 log.go:172] (0xc000b9ef20) Data frame received for 5\nI0312 19:20:09.119943 1441 log.go:172] (0xc0003c74a0) (5) Data frame handling\nI0312 19:20:09.119961 1441 log.go:172] (0xc0003c74a0) (5) Data frame sent\nI0312 19:20:09.119973 1441 log.go:172] (0xc000b9ef20) Data frame received for 5\nI0312 19:20:09.119983 1441 log.go:172] (0xc0003c74a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.173.121 80\nConnection to 10.105.173.121 80 port [tcp/http] succeeded!\nI0312 19:20:09.120920 1441 log.go:172] (0xc000b9ef20) Data frame received for 1\nI0312 19:20:09.120943 1441 log.go:172] (0xc000b946e0) (1) Data frame handling\nI0312 19:20:09.121026 1441 log.go:172] (0xc000b946e0) (1) Data frame sent\nI0312 19:20:09.121056 1441 log.go:172] (0xc000b9ef20) (0xc000b946e0) Stream removed, broadcasting: 1\nI0312 19:20:09.121074 1441 log.go:172] (0xc000b9ef20) Go away received\nI0312 19:20:09.121491 1441 log.go:172] (0xc000b9ef20) (0xc000b946e0) Stream removed, broadcasting: 1\nI0312 19:20:09.121521 1441 log.go:172] (0xc000b9ef20) (0xc0006286e0) Stream removed, broadcasting: 3\nI0312 19:20:09.121534 1441 log.go:172] (0xc000b9ef20) (0xc0003c74a0) Stream removed, broadcasting: 5\n" Mar 12 19:20:09.125: INFO: stdout: "" Mar 12 19:20:09.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-43 execpodpdtql -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 31336' Mar 12 19:20:09.294: INFO: stderr: "I0312 19:20:09.232117 1463 log.go:172] (0xc000225080) (0xc0006b7b80) Create stream\nI0312 19:20:09.232155 1463 log.go:172] (0xc000225080) (0xc0006b7b80) Stream added, broadcasting: 1\nI0312 19:20:09.233732 1463 log.go:172] (0xc000225080) Reply frame received for 1\nI0312 19:20:09.233756 1463 log.go:172] (0xc000225080) (0xc0005d4000) Create stream\nI0312 19:20:09.233764 1463 log.go:172] (0xc000225080) (0xc0005d4000) Stream added, broadcasting: 3\nI0312 19:20:09.234284 1463 log.go:172] (0xc000225080) Reply frame received for 3\nI0312 19:20:09.234303 1463 log.go:172] (0xc000225080) (0xc0006b7d60) Create stream\nI0312 19:20:09.234310 1463 log.go:172] (0xc000225080) (0xc0006b7d60) Stream added, broadcasting: 5\nI0312 19:20:09.234841 1463 log.go:172] (0xc000225080) Reply frame received for 5\nI0312 19:20:09.290977 1463 log.go:172] (0xc000225080) Data frame received for 3\nI0312 19:20:09.290996 1463 log.go:172] (0xc0005d4000) (3) Data frame handling\nI0312 19:20:09.291008 1463 log.go:172] (0xc000225080) Data frame received for 5\nI0312 19:20:09.291013 1463 log.go:172] (0xc0006b7d60) (5) Data frame handling\nI0312 19:20:09.291018 1463 log.go:172] (0xc0006b7d60) (5) Data frame sent\nI0312 19:20:09.291022 1463 log.go:172] (0xc000225080) Data frame received for 5\nI0312 19:20:09.291026 1463 log.go:172] (0xc0006b7d60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.4 31336\nConnection to 172.17.0.4 31336 port [tcp/31336] succeeded!\nI0312 19:20:09.291845 1463 log.go:172] (0xc000225080) Data frame received for 1\nI0312 19:20:09.291857 1463 log.go:172] (0xc0006b7b80) (1) Data frame handling\nI0312 19:20:09.291862 1463 log.go:172] (0xc0006b7b80) (1) Data frame sent\nI0312 19:20:09.291875 1463 log.go:172] (0xc000225080) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0312 19:20:09.291902 1463 log.go:172] (0xc000225080) Go away received\nI0312 19:20:09.292083 1463 log.go:172] (0xc000225080) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0312 19:20:09.292092 1463 log.go:172] (0xc000225080) (0xc0005d4000) Stream removed, broadcasting: 3\nI0312 19:20:09.292097 1463 log.go:172] (0xc000225080) (0xc0006b7d60) Stream removed, broadcasting: 5\n" Mar 12 19:20:09.295: INFO: stdout: "" Mar 12 19:20:09.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-43 execpodpdtql -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 31336' Mar 12 19:20:09.449: INFO: stderr: "I0312 19:20:09.389380 1484 log.go:172] (0xc0009d54a0) (0xc000a2c780) Create stream\nI0312 19:20:09.389411 1484 log.go:172] (0xc0009d54a0) (0xc000a2c780) Stream added, broadcasting: 1\nI0312 19:20:09.392089 1484 log.go:172] (0xc0009d54a0) Reply frame received for 1\nI0312 19:20:09.392117 1484 log.go:172] (0xc0009d54a0) (0xc00064a640) Create stream\nI0312 19:20:09.392123 1484 log.go:172] (0xc0009d54a0) (0xc00064a640) Stream added, broadcasting: 3\nI0312 19:20:09.392696 1484 log.go:172] (0xc0009d54a0) Reply frame received for 3\nI0312 19:20:09.392718 1484 log.go:172] (0xc0009d54a0) (0xc000507400) Create stream\nI0312 19:20:09.392725 1484 log.go:172] (0xc0009d54a0) (0xc000507400) Stream added, broadcasting: 5\nI0312 19:20:09.393214 1484 log.go:172] (0xc0009d54a0) Reply frame received for 5\nI0312 19:20:09.443620 1484 log.go:172] (0xc0009d54a0) Data frame received for 3\nI0312 19:20:09.443637 1484 log.go:172] (0xc00064a640) (3) Data frame handling\nI0312 19:20:09.443669 1484 log.go:172] 
(0xc0009d54a0) Data frame received for 5\nI0312 19:20:09.443699 1484 log.go:172] (0xc000507400) (5) Data frame handling\nI0312 19:20:09.443719 1484 log.go:172] (0xc000507400) (5) Data frame sent\nI0312 19:20:09.443739 1484 log.go:172] (0xc0009d54a0) Data frame received for 5\nI0312 19:20:09.443748 1484 log.go:172] (0xc000507400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 31336\nConnection to 172.17.0.5 31336 port [tcp/31336] succeeded!\nI0312 19:20:09.444601 1484 log.go:172] (0xc0009d54a0) Data frame received for 1\nI0312 19:20:09.444668 1484 log.go:172] (0xc000a2c780) (1) Data frame handling\nI0312 19:20:09.444685 1484 log.go:172] (0xc000a2c780) (1) Data frame sent\nI0312 19:20:09.444696 1484 log.go:172] (0xc0009d54a0) (0xc000a2c780) Stream removed, broadcasting: 1\nI0312 19:20:09.444891 1484 log.go:172] (0xc0009d54a0) Go away received\nI0312 19:20:09.444938 1484 log.go:172] (0xc0009d54a0) (0xc000a2c780) Stream removed, broadcasting: 1\nI0312 19:20:09.444956 1484 log.go:172] (0xc0009d54a0) (0xc00064a640) Stream removed, broadcasting: 3\nI0312 19:20:09.444971 1484 log.go:172] (0xc0009d54a0) (0xc000507400) Stream removed, broadcasting: 5\n" Mar 12 19:20:09.449: INFO: stdout: "" Mar 12 19:20:09.449: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:20:09.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-43" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.183 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":83,"skipped":1268,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:20:09.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:20:09.742: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:20:11.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7916" for this suite. 
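Editor's note on the Services spec above: the ExternalName-to-NodePort transition it exercises is, at the API-object level, a mutation of one Service. The following is a minimal sketch using k8s.io/api struct literals, not the e2e framework's own code; the external DNS target, selector and image-side details are placeholders (the run above had NodePort 31336 allocated automatically).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Phase 1: an ExternalName Service is just a CNAME served by cluster
	// DNS; it has no selector, no cluster IP, and no node port.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service", Namespace: "services-43"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "clusterset.example.com", // placeholder DNS target
		},
	}

	// Phase 2: switching to NodePort requires clearing externalName and
	// supplying a selector and port, so that endpoints get populated and
	// kube-proxy can allocate and program a node port.
	svc.Spec.ExternalName = ""
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.Selector = map[string]string{"name": "externalname-service"}
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}

	out, _ := json.MarshalIndent(&svc, "", "  ")
	fmt.Println(string(out))
}

The three nc probes in the log then verify reachability through the service DNS name, the cluster IP (10.105.173.121), and each node IP on the allocated node port.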
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1272,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:20:11.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:20:11.891: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:20:17.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2444" for this suite. • [SLOW TEST:6.023 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":85,"skipped":1284,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:20:17.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 12 19:20:21.981: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:22.001: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:24.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:24.005: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:26.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:26.005: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:28.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:28.005: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:30.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:30.005: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:32.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:32.005: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:34.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:34.005: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:36.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:36.005: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 19:20:38.001: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 19:20:38.004: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:20:38.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5002" for this suite. 
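Editor's note: "poststart http hook" means the pod under test carries a lifecycle.postStart HTTP handler, which the kubelet invokes immediately after the container starts, aimed at the handler pod created in BeforeEach. A minimal sketch of such a pod spec follows; type names match the v1.17-era k8s.io/api of this run (newer releases renamed Handler to LifecycleHandler), and the image, path, and target IP/port are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					// The kubelet fires this GET right after the container
					// starts; a failed hook kills the container.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // placeholder path
							Host: "10.244.1.20",         // placeholder: handler pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}

The "check poststart hook" step then polls the handler pod until it has observed the GET; the disappear loop afterwards is ordinary graceful pod deletion.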
• [SLOW TEST:20.163 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1294,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:20:38.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9956 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 12 19:20:38.125: INFO: Found 0 stateful pods, waiting for 3 Mar 12 19:20:48.129: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:20:48.129: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:20:48.129: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 12 19:20:48.154: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 12 19:20:58.189: INFO: Updating stateful set ss2 Mar 12 19:20:58.204: INFO: Waiting for Pod statefulset-9956/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 12 19:21:08.302: INFO: Found 2 stateful pods, waiting for 3 Mar 12 19:21:18.306: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:21:18.306: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 19:21:18.306: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 12 19:21:18.325: INFO: Updating stateful set ss2 Mar 12 19:21:18.342: INFO: Waiting for Pod statefulset-9956/ss2-1 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 12 19:21:28.371: INFO: Updating stateful set ss2 Mar 12 19:21:28.398: INFO: Waiting for StatefulSet statefulset-9956/ss2 to complete update Mar 12 19:21:28.398: INFO: Waiting for Pod statefulset-9956/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 19:21:38.404: INFO: Deleting all statefulset in ns statefulset-9956 Mar 12 19:21:38.407: INFO: Scaling statefulset ss2 to 0 Mar 12 19:21:58.419: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 19:21:58.421: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:21:58.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9956" for this suite. • [SLOW TEST:80.432 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":87,"skipped":1297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:21:58.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 12 19:21:58.500: INFO: Waiting up to 5m0s for pod "pod-bdb4b44b-0a1e-42b4-b37e-0867f6e64d27" in namespace "emptydir-1148" to be "success or failure" Mar 12 19:21:58.504: INFO: Pod "pod-bdb4b44b-0a1e-42b4-b37e-0867f6e64d27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722551ms Mar 12 19:22:00.509: INFO: Pod "pod-bdb4b44b-0a1e-42b4-b37e-0867f6e64d27": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008690322s STEP: Saw pod success Mar 12 19:22:00.509: INFO: Pod "pod-bdb4b44b-0a1e-42b4-b37e-0867f6e64d27" satisfied condition "success or failure" Mar 12 19:22:00.512: INFO: Trying to get logs from node jerma-worker pod pod-bdb4b44b-0a1e-42b4-b37e-0867f6e64d27 container test-container: STEP: delete the pod Mar 12 19:22:00.547: INFO: Waiting for pod pod-bdb4b44b-0a1e-42b4-b37e-0867f6e64d27 to disappear Mar 12 19:22:00.552: INFO: Pod pod-bdb4b44b-0a1e-42b4-b37e-0867f6e64d27 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:22:00.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1148" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1332,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:22:00.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-7127b625-56a1-4b85-bac9-dc3a31b47a9c STEP: Creating a pod to test consume configMaps Mar 12 19:22:00.608: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d238188-8d0f-4b0f-a49c-cc834c4a0aec" in namespace "projected-9002" to be "success or failure" Mar 12 19:22:00.640: INFO: Pod "pod-projected-configmaps-4d238188-8d0f-4b0f-a49c-cc834c4a0aec": Phase="Pending", Reason="", readiness=false. Elapsed: 32.380561ms Mar 12 19:22:02.644: INFO: Pod "pod-projected-configmaps-4d238188-8d0f-4b0f-a49c-cc834c4a0aec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035819349s STEP: Saw pod success Mar 12 19:22:02.644: INFO: Pod "pod-projected-configmaps-4d238188-8d0f-4b0f-a49c-cc834c4a0aec" satisfied condition "success or failure" Mar 12 19:22:02.646: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-4d238188-8d0f-4b0f-a49c-cc834c4a0aec container projected-configmap-volume-test: STEP: delete the pod Mar 12 19:22:02.677: INFO: Waiting for pod pod-projected-configmaps-4d238188-8d0f-4b0f-a49c-cc834c4a0aec to disappear Mar 12 19:22:02.684: INFO: Pod pod-projected-configmaps-4d238188-8d0f-4b0f-a49c-cc834c4a0aec no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:22:02.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9002" for this suite. 
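Editor's note: "consumable in multiple volumes in the same pod" boils down to two projected volumes backed by the same ConfigMap, mounted at two different paths. A minimal sketch follows; volume names, mount paths, key names, and the image are placeholders rather than the test's generated values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both volumes project the same ConfigMap; the container can read the
	// same key through either mount path.
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "projected-configmap-volume", VolumeSource: cmSource()},
				{Name: "projected-configmap-volume-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // placeholder image
				Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume", ReadOnly: true},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-configmap-volume-2", ReadOnly: true},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}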
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1345,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:22:02.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 12 19:22:10.826: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:10.826: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:10.858813 6 log.go:172] (0xc001c9a420) (0xc002255cc0) Create stream I0312 19:22:10.858842 6 log.go:172] (0xc001c9a420) (0xc002255cc0) Stream added, broadcasting: 1 I0312 19:22:10.860992 6 log.go:172] (0xc001c9a420) Reply frame received for 1 I0312 19:22:10.861023 6 log.go:172] (0xc001c9a420) (0xc001f705a0) Create stream I0312 19:22:10.861035 6 log.go:172] (0xc001c9a420) (0xc001f705a0) Stream added, broadcasting: 3 I0312 19:22:10.862016 6 log.go:172] (0xc001c9a420) Reply frame received for 3 I0312 19:22:10.862048 6 log.go:172] (0xc001c9a420) (0xc001f70640) Create stream I0312 19:22:10.862063 6 log.go:172] (0xc001c9a420) (0xc001f70640) Stream added, broadcasting: 5 I0312 19:22:10.863384 6 log.go:172] (0xc001c9a420) Reply frame received for 5 I0312 19:22:10.921491 6 log.go:172] (0xc001c9a420) Data frame received for 3 I0312 19:22:10.921524 6 log.go:172] (0xc001f705a0) (3) Data frame handling I0312 19:22:10.921534 6 log.go:172] (0xc001f705a0) (3) Data frame sent I0312 19:22:10.921544 6 log.go:172] (0xc001c9a420) Data frame received for 3 I0312 19:22:10.921553 6 log.go:172] (0xc001f705a0) (3) Data frame handling I0312 19:22:10.921575 6 log.go:172] (0xc001c9a420) Data frame received for 5 I0312 19:22:10.921586 6 log.go:172] (0xc001f70640) (5) Data frame handling I0312 19:22:10.922968 6 log.go:172] (0xc001c9a420) Data frame received for 1 I0312 19:22:10.923015 6 log.go:172] (0xc002255cc0) (1) Data frame handling I0312 19:22:10.923061 6 log.go:172] (0xc002255cc0) (1) Data frame sent I0312 19:22:10.923142 6 log.go:172] (0xc001c9a420) (0xc002255cc0) Stream removed, broadcasting: 1 I0312 19:22:10.923173 6 log.go:172] (0xc001c9a420) Go away received I0312 19:22:10.923250 6 log.go:172] (0xc001c9a420) (0xc002255cc0) Stream removed, broadcasting: 1 I0312 19:22:10.923273 6 log.go:172] (0xc001c9a420) (0xc001f705a0) Stream removed, broadcasting: 3 I0312 19:22:10.923286 6 log.go:172] (0xc001c9a420) (0xc001f70640) Stream removed, broadcasting: 5 Mar 12 19:22:10.923: INFO: Exec stderr: "" Mar 12 
19:22:10.923: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:10.923: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:10.965996 6 log.go:172] (0xc001c9aa50) (0xc002255ea0) Create stream I0312 19:22:10.966026 6 log.go:172] (0xc001c9aa50) (0xc002255ea0) Stream added, broadcasting: 1 I0312 19:22:10.967985 6 log.go:172] (0xc001c9aa50) Reply frame received for 1 I0312 19:22:10.968016 6 log.go:172] (0xc001c9aa50) (0xc002255f40) Create stream I0312 19:22:10.968027 6 log.go:172] (0xc001c9aa50) (0xc002255f40) Stream added, broadcasting: 3 I0312 19:22:10.968808 6 log.go:172] (0xc001c9aa50) Reply frame received for 3 I0312 19:22:10.968847 6 log.go:172] (0xc001c9aa50) (0xc0015b6000) Create stream I0312 19:22:10.968858 6 log.go:172] (0xc001c9aa50) (0xc0015b6000) Stream added, broadcasting: 5 I0312 19:22:10.969757 6 log.go:172] (0xc001c9aa50) Reply frame received for 5 I0312 19:22:11.037123 6 log.go:172] (0xc001c9aa50) Data frame received for 5 I0312 19:22:11.037154 6 log.go:172] (0xc0015b6000) (5) Data frame handling I0312 19:22:11.037178 6 log.go:172] (0xc001c9aa50) Data frame received for 3 I0312 19:22:11.037188 6 log.go:172] (0xc002255f40) (3) Data frame handling I0312 19:22:11.037200 6 log.go:172] (0xc002255f40) (3) Data frame sent I0312 19:22:11.037214 6 log.go:172] (0xc001c9aa50) Data frame received for 3 I0312 19:22:11.037228 6 log.go:172] (0xc002255f40) (3) Data frame handling I0312 19:22:11.038612 6 log.go:172] (0xc001c9aa50) Data frame received for 1 I0312 19:22:11.038638 6 log.go:172] (0xc002255ea0) (1) Data frame handling I0312 19:22:11.038653 6 log.go:172] (0xc002255ea0) (1) Data frame sent I0312 19:22:11.038672 6 log.go:172] (0xc001c9aa50) (0xc002255ea0) Stream removed, broadcasting: 1 I0312 19:22:11.038747 6 log.go:172] (0xc001c9aa50) (0xc002255ea0) Stream removed, broadcasting: 1 I0312 19:22:11.038760 6 log.go:172] (0xc001c9aa50) (0xc002255f40) Stream removed, broadcasting: 3 I0312 19:22:11.038769 6 log.go:172] (0xc001c9aa50) (0xc0015b6000) Stream removed, broadcasting: 5 Mar 12 19:22:11.038: INFO: Exec stderr: "" Mar 12 19:22:11.038: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.038: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.040518 6 log.go:172] (0xc001c9aa50) Go away received I0312 19:22:11.090713 6 log.go:172] (0xc001c9adc0) (0xc0015b63c0) Create stream I0312 19:22:11.090738 6 log.go:172] (0xc001c9adc0) (0xc0015b63c0) Stream added, broadcasting: 1 I0312 19:22:11.092865 6 log.go:172] (0xc001c9adc0) Reply frame received for 1 I0312 19:22:11.092922 6 log.go:172] (0xc001c9adc0) (0xc0023cabe0) Create stream I0312 19:22:11.092936 6 log.go:172] (0xc001c9adc0) (0xc0023cabe0) Stream added, broadcasting: 3 I0312 19:22:11.093864 6 log.go:172] (0xc001c9adc0) Reply frame received for 3 I0312 19:22:11.093893 6 log.go:172] (0xc001c9adc0) (0xc0023cac80) Create stream I0312 19:22:11.093901 6 log.go:172] (0xc001c9adc0) (0xc0023cac80) Stream added, broadcasting: 5 I0312 19:22:11.094774 6 log.go:172] (0xc001c9adc0) Reply frame received for 5 I0312 19:22:11.144285 6 log.go:172] (0xc001c9adc0) Data frame received for 3 I0312 19:22:11.144308 6 log.go:172] (0xc0023cabe0) (3) Data frame handling I0312 19:22:11.144323 6 log.go:172] (0xc0023cabe0) (3) Data 
frame sent I0312 19:22:11.144448 6 log.go:172] (0xc001c9adc0) Data frame received for 5 I0312 19:22:11.144468 6 log.go:172] (0xc0023cac80) (5) Data frame handling I0312 19:22:11.144491 6 log.go:172] (0xc001c9adc0) Data frame received for 3 I0312 19:22:11.144509 6 log.go:172] (0xc0023cabe0) (3) Data frame handling I0312 19:22:11.145467 6 log.go:172] (0xc001c9adc0) Data frame received for 1 I0312 19:22:11.145493 6 log.go:172] (0xc0015b63c0) (1) Data frame handling I0312 19:22:11.145512 6 log.go:172] (0xc0015b63c0) (1) Data frame sent I0312 19:22:11.145529 6 log.go:172] (0xc001c9adc0) (0xc0015b63c0) Stream removed, broadcasting: 1 I0312 19:22:11.145556 6 log.go:172] (0xc001c9adc0) Go away received I0312 19:22:11.145706 6 log.go:172] (0xc001c9adc0) (0xc0015b63c0) Stream removed, broadcasting: 1 I0312 19:22:11.145724 6 log.go:172] (0xc001c9adc0) (0xc0023cabe0) Stream removed, broadcasting: 3 I0312 19:22:11.145733 6 log.go:172] (0xc001c9adc0) (0xc0023cac80) Stream removed, broadcasting: 5 Mar 12 19:22:11.145: INFO: Exec stderr: "" Mar 12 19:22:11.145: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.145: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.169945 6 log.go:172] (0xc0024febb0) (0xc001f708c0) Create stream I0312 19:22:11.169970 6 log.go:172] (0xc0024febb0) (0xc001f708c0) Stream added, broadcasting: 1 I0312 19:22:11.172146 6 log.go:172] (0xc0024febb0) Reply frame received for 1 I0312 19:22:11.172192 6 log.go:172] (0xc0024febb0) (0xc001f70960) Create stream I0312 19:22:11.172208 6 log.go:172] (0xc0024febb0) (0xc001f70960) Stream added, broadcasting: 3 I0312 19:22:11.173064 6 log.go:172] (0xc0024febb0) Reply frame received for 3 I0312 19:22:11.173093 6 log.go:172] (0xc0024febb0) (0xc0015b6500) Create stream I0312 19:22:11.173101 6 log.go:172] (0xc0024febb0) (0xc0015b6500) Stream added, broadcasting: 5 I0312 19:22:11.173867 6 log.go:172] (0xc0024febb0) Reply frame received for 5 I0312 19:22:11.227692 6 log.go:172] (0xc0024febb0) Data frame received for 5 I0312 19:22:11.227725 6 log.go:172] (0xc0015b6500) (5) Data frame handling I0312 19:22:11.227748 6 log.go:172] (0xc0024febb0) Data frame received for 3 I0312 19:22:11.227761 6 log.go:172] (0xc001f70960) (3) Data frame handling I0312 19:22:11.227775 6 log.go:172] (0xc001f70960) (3) Data frame sent I0312 19:22:11.227787 6 log.go:172] (0xc0024febb0) Data frame received for 3 I0312 19:22:11.227797 6 log.go:172] (0xc001f70960) (3) Data frame handling I0312 19:22:11.228744 6 log.go:172] (0xc0024febb0) Data frame received for 1 I0312 19:22:11.228766 6 log.go:172] (0xc001f708c0) (1) Data frame handling I0312 19:22:11.228777 6 log.go:172] (0xc001f708c0) (1) Data frame sent I0312 19:22:11.228793 6 log.go:172] (0xc0024febb0) (0xc001f708c0) Stream removed, broadcasting: 1 I0312 19:22:11.228815 6 log.go:172] (0xc0024febb0) Go away received I0312 19:22:11.228987 6 log.go:172] (0xc0024febb0) (0xc001f708c0) Stream removed, broadcasting: 1 I0312 19:22:11.229010 6 log.go:172] (0xc0024febb0) (0xc001f70960) Stream removed, broadcasting: 3 I0312 19:22:11.229024 6 log.go:172] (0xc0024febb0) (0xc0015b6500) Stream removed, broadcasting: 5 Mar 12 19:22:11.229: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 12 19:22:11.229: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.229: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.250841 6 log.go:172] (0xc0024ff1e0) (0xc001f70e60) Create stream I0312 19:22:11.250870 6 log.go:172] (0xc0024ff1e0) (0xc001f70e60) Stream added, broadcasting: 1 I0312 19:22:11.252840 6 log.go:172] (0xc0024ff1e0) Reply frame received for 1 I0312 19:22:11.252871 6 log.go:172] (0xc0024ff1e0) (0xc001f70f00) Create stream I0312 19:22:11.252883 6 log.go:172] (0xc0024ff1e0) (0xc001f70f00) Stream added, broadcasting: 3 I0312 19:22:11.253633 6 log.go:172] (0xc0024ff1e0) Reply frame received for 3 I0312 19:22:11.253655 6 log.go:172] (0xc0024ff1e0) (0xc001f70fa0) Create stream I0312 19:22:11.253661 6 log.go:172] (0xc0024ff1e0) (0xc001f70fa0) Stream added, broadcasting: 5 I0312 19:22:11.254624 6 log.go:172] (0xc0024ff1e0) Reply frame received for 5 I0312 19:22:11.342499 6 log.go:172] (0xc0024ff1e0) Data frame received for 3 I0312 19:22:11.342525 6 log.go:172] (0xc001f70f00) (3) Data frame handling I0312 19:22:11.342531 6 log.go:172] (0xc001f70f00) (3) Data frame sent I0312 19:22:11.342539 6 log.go:172] (0xc0024ff1e0) Data frame received for 3 I0312 19:22:11.342544 6 log.go:172] (0xc001f70f00) (3) Data frame handling I0312 19:22:11.342558 6 log.go:172] (0xc0024ff1e0) Data frame received for 5 I0312 19:22:11.342563 6 log.go:172] (0xc001f70fa0) (5) Data frame handling I0312 19:22:11.343260 6 log.go:172] (0xc0024ff1e0) Data frame received for 1 I0312 19:22:11.343279 6 log.go:172] (0xc001f70e60) (1) Data frame handling I0312 19:22:11.343286 6 log.go:172] (0xc001f70e60) (1) Data frame sent I0312 19:22:11.343292 6 log.go:172] (0xc0024ff1e0) (0xc001f70e60) Stream removed, broadcasting: 1 I0312 19:22:11.343299 6 log.go:172] (0xc0024ff1e0) Go away received I0312 19:22:11.343411 6 log.go:172] (0xc0024ff1e0) (0xc001f70e60) Stream removed, broadcasting: 1 I0312 19:22:11.343423 6 log.go:172] (0xc0024ff1e0) (0xc001f70f00) Stream removed, broadcasting: 3 I0312 19:22:11.343429 6 log.go:172] (0xc0024ff1e0) (0xc001f70fa0) Stream removed, broadcasting: 5 Mar 12 19:22:11.343: INFO: Exec stderr: "" Mar 12 19:22:11.343: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.343: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.359874 6 log.go:172] (0xc002c500b0) (0xc0023cb680) Create stream I0312 19:22:11.359891 6 log.go:172] (0xc002c500b0) (0xc0023cb680) Stream added, broadcasting: 1 I0312 19:22:11.361238 6 log.go:172] (0xc002c500b0) Reply frame received for 1 I0312 19:22:11.361263 6 log.go:172] (0xc002c500b0) (0xc001436500) Create stream I0312 19:22:11.361271 6 log.go:172] (0xc002c500b0) (0xc001436500) Stream added, broadcasting: 3 I0312 19:22:11.361984 6 log.go:172] (0xc002c500b0) Reply frame received for 3 I0312 19:22:11.362010 6 log.go:172] (0xc002c500b0) (0xc001f71040) Create stream I0312 19:22:11.362021 6 log.go:172] (0xc002c500b0) (0xc001f71040) Stream added, broadcasting: 5 I0312 19:22:11.362659 6 log.go:172] (0xc002c500b0) Reply frame received for 5 I0312 19:22:11.422180 6 log.go:172] (0xc002c500b0) Data frame received for 5 I0312 19:22:11.422199 6 log.go:172] (0xc001f71040) (5) Data frame handling I0312 19:22:11.422211 6 log.go:172] (0xc002c500b0) Data frame received for 3 I0312 19:22:11.422218 6 log.go:172] 
(0xc001436500) (3) Data frame handling I0312 19:22:11.422224 6 log.go:172] (0xc001436500) (3) Data frame sent I0312 19:22:11.422230 6 log.go:172] (0xc002c500b0) Data frame received for 3 I0312 19:22:11.422234 6 log.go:172] (0xc001436500) (3) Data frame handling I0312 19:22:11.422790 6 log.go:172] (0xc002c500b0) Data frame received for 1 I0312 19:22:11.422807 6 log.go:172] (0xc0023cb680) (1) Data frame handling I0312 19:22:11.422814 6 log.go:172] (0xc0023cb680) (1) Data frame sent I0312 19:22:11.422823 6 log.go:172] (0xc002c500b0) (0xc0023cb680) Stream removed, broadcasting: 1 I0312 19:22:11.422834 6 log.go:172] (0xc002c500b0) Go away received I0312 19:22:11.422889 6 log.go:172] (0xc002c500b0) (0xc0023cb680) Stream removed, broadcasting: 1 I0312 19:22:11.422905 6 log.go:172] (0xc002c500b0) (0xc001436500) Stream removed, broadcasting: 3 I0312 19:22:11.422914 6 log.go:172] (0xc002c500b0) (0xc001f71040) Stream removed, broadcasting: 5 Mar 12 19:22:11.422: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 12 19:22:11.422: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.422: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.439036 6 log.go:172] (0xc0024ff810) (0xc001f712c0) Create stream I0312 19:22:11.439055 6 log.go:172] (0xc0024ff810) (0xc001f712c0) Stream added, broadcasting: 1 I0312 19:22:11.440153 6 log.go:172] (0xc0024ff810) Reply frame received for 1 I0312 19:22:11.440171 6 log.go:172] (0xc0024ff810) (0xc0015b6780) Create stream I0312 19:22:11.440177 6 log.go:172] (0xc0024ff810) (0xc0015b6780) Stream added, broadcasting: 3 I0312 19:22:11.440634 6 log.go:172] (0xc0024ff810) Reply frame received for 3 I0312 19:22:11.440653 6 log.go:172] (0xc0024ff810) (0xc0023cb720) Create stream I0312 19:22:11.440658 6 log.go:172] (0xc0024ff810) (0xc0023cb720) Stream added, broadcasting: 5 I0312 19:22:11.441122 6 log.go:172] (0xc0024ff810) Reply frame received for 5 I0312 19:22:11.468573 6 log.go:172] (0xc0024ff810) Data frame received for 3 I0312 19:22:11.468598 6 log.go:172] (0xc0015b6780) (3) Data frame handling I0312 19:22:11.468616 6 log.go:172] (0xc0015b6780) (3) Data frame sent I0312 19:22:11.468623 6 log.go:172] (0xc0024ff810) Data frame received for 3 I0312 19:22:11.468632 6 log.go:172] (0xc0015b6780) (3) Data frame handling I0312 19:22:11.468702 6 log.go:172] (0xc0024ff810) Data frame received for 5 I0312 19:22:11.468712 6 log.go:172] (0xc0023cb720) (5) Data frame handling I0312 19:22:11.469725 6 log.go:172] (0xc0024ff810) Data frame received for 1 I0312 19:22:11.469737 6 log.go:172] (0xc001f712c0) (1) Data frame handling I0312 19:22:11.469747 6 log.go:172] (0xc001f712c0) (1) Data frame sent I0312 19:22:11.469759 6 log.go:172] (0xc0024ff810) (0xc001f712c0) Stream removed, broadcasting: 1 I0312 19:22:11.469774 6 log.go:172] (0xc0024ff810) Go away received I0312 19:22:11.469890 6 log.go:172] (0xc0024ff810) (0xc001f712c0) Stream removed, broadcasting: 1 I0312 19:22:11.469905 6 log.go:172] (0xc0024ff810) (0xc0015b6780) Stream removed, broadcasting: 3 I0312 19:22:11.469918 6 log.go:172] (0xc0024ff810) (0xc0023cb720) Stream removed, broadcasting: 5 Mar 12 19:22:11.469: INFO: Exec stderr: "" Mar 12 19:22:11.469: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.469: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.486352 6 log.go:172] (0xc001c9b3f0) (0xc0015b6d20) Create stream I0312 19:22:11.486371 6 log.go:172] (0xc001c9b3f0) (0xc0015b6d20) Stream added, broadcasting: 1 I0312 19:22:11.487491 6 log.go:172] (0xc001c9b3f0) Reply frame received for 1 I0312 19:22:11.487509 6 log.go:172] (0xc001c9b3f0) (0xc001f71360) Create stream I0312 19:22:11.487516 6 log.go:172] (0xc001c9b3f0) (0xc001f71360) Stream added, broadcasting: 3 I0312 19:22:11.488012 6 log.go:172] (0xc001c9b3f0) Reply frame received for 3 I0312 19:22:11.488027 6 log.go:172] (0xc001c9b3f0) (0xc0014366e0) Create stream I0312 19:22:11.488033 6 log.go:172] (0xc001c9b3f0) (0xc0014366e0) Stream added, broadcasting: 5 I0312 19:22:11.488448 6 log.go:172] (0xc001c9b3f0) Reply frame received for 5 I0312 19:22:11.542110 6 log.go:172] (0xc001c9b3f0) Data frame received for 3 I0312 19:22:11.542203 6 log.go:172] (0xc001f71360) (3) Data frame handling I0312 19:22:11.542217 6 log.go:172] (0xc001f71360) (3) Data frame sent I0312 19:22:11.542224 6 log.go:172] (0xc001c9b3f0) Data frame received for 3 I0312 19:22:11.542229 6 log.go:172] (0xc001f71360) (3) Data frame handling I0312 19:22:11.542271 6 log.go:172] (0xc001c9b3f0) Data frame received for 5 I0312 19:22:11.542284 6 log.go:172] (0xc0014366e0) (5) Data frame handling I0312 19:22:11.542783 6 log.go:172] (0xc001c9b3f0) Data frame received for 1 I0312 19:22:11.542794 6 log.go:172] (0xc0015b6d20) (1) Data frame handling I0312 19:22:11.542806 6 log.go:172] (0xc0015b6d20) (1) Data frame sent I0312 19:22:11.542818 6 log.go:172] (0xc001c9b3f0) (0xc0015b6d20) Stream removed, broadcasting: 1 I0312 19:22:11.542835 6 log.go:172] (0xc001c9b3f0) Go away received I0312 19:22:11.542899 6 log.go:172] (0xc001c9b3f0) (0xc0015b6d20) Stream removed, broadcasting: 1 I0312 19:22:11.542910 6 log.go:172] (0xc001c9b3f0) (0xc001f71360) Stream removed, broadcasting: 3 I0312 19:22:11.542916 6 log.go:172] (0xc001c9b3f0) (0xc0014366e0) Stream removed, broadcasting: 5 Mar 12 19:22:11.542: INFO: Exec stderr: "" Mar 12 19:22:11.542: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.542: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.559782 6 log.go:172] (0xc002c506e0) (0xc0023cb9a0) Create stream I0312 19:22:11.559801 6 log.go:172] (0xc002c506e0) (0xc0023cb9a0) Stream added, broadcasting: 1 I0312 19:22:11.561016 6 log.go:172] (0xc002c506e0) Reply frame received for 1 I0312 19:22:11.561034 6 log.go:172] (0xc002c506e0) (0xc0015b6e60) Create stream I0312 19:22:11.561040 6 log.go:172] (0xc002c506e0) (0xc0015b6e60) Stream added, broadcasting: 3 I0312 19:22:11.561458 6 log.go:172] (0xc002c506e0) Reply frame received for 3 I0312 19:22:11.561476 6 log.go:172] (0xc002c506e0) (0xc0014368c0) Create stream I0312 19:22:11.561482 6 log.go:172] (0xc002c506e0) (0xc0014368c0) Stream added, broadcasting: 5 I0312 19:22:11.561976 6 log.go:172] (0xc002c506e0) Reply frame received for 5 I0312 19:22:11.627398 6 log.go:172] (0xc002c506e0) Data frame received for 3 I0312 19:22:11.627419 6 log.go:172] (0xc0015b6e60) (3) Data frame handling I0312 19:22:11.627434 6 log.go:172] (0xc0015b6e60) (3) Data frame sent I0312 19:22:11.627541 6 log.go:172] (0xc002c506e0) Data frame received for 3 I0312 19:22:11.627561 6 
log.go:172] (0xc0015b6e60) (3) Data frame handling I0312 19:22:11.627578 6 log.go:172] (0xc002c506e0) Data frame received for 5 I0312 19:22:11.627592 6 log.go:172] (0xc0014368c0) (5) Data frame handling I0312 19:22:11.628351 6 log.go:172] (0xc002c506e0) Data frame received for 1 I0312 19:22:11.628374 6 log.go:172] (0xc0023cb9a0) (1) Data frame handling I0312 19:22:11.628392 6 log.go:172] (0xc0023cb9a0) (1) Data frame sent I0312 19:22:11.628408 6 log.go:172] (0xc002c506e0) (0xc0023cb9a0) Stream removed, broadcasting: 1 I0312 19:22:11.628492 6 log.go:172] (0xc002c506e0) (0xc0023cb9a0) Stream removed, broadcasting: 1 I0312 19:22:11.628511 6 log.go:172] (0xc002c506e0) (0xc0015b6e60) Stream removed, broadcasting: 3 I0312 19:22:11.628529 6 log.go:172] (0xc002c506e0) Go away received I0312 19:22:11.628555 6 log.go:172] (0xc002c506e0) (0xc0014368c0) Stream removed, broadcasting: 5 Mar 12 19:22:11.628: INFO: Exec stderr: "" Mar 12 19:22:11.628: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1245 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:22:11.628: INFO: >>> kubeConfig: /root/.kube/config I0312 19:22:11.647699 6 log.go:172] (0xc00286bd90) (0xc0021da320) Create stream I0312 19:22:11.647715 6 log.go:172] (0xc00286bd90) (0xc0021da320) Stream added, broadcasting: 1 I0312 19:22:11.651218 6 log.go:172] (0xc00286bd90) Reply frame received for 1 I0312 19:22:11.651254 6 log.go:172] (0xc00286bd90) (0xc001f71400) Create stream I0312 19:22:11.651268 6 log.go:172] (0xc00286bd90) (0xc001f71400) Stream added, broadcasting: 3 I0312 19:22:11.654364 6 log.go:172] (0xc00286bd90) Reply frame received for 3 I0312 19:22:11.654388 6 log.go:172] (0xc00286bd90) (0xc001f714a0) Create stream I0312 19:22:11.654398 6 log.go:172] (0xc00286bd90) (0xc001f714a0) Stream added, broadcasting: 5 I0312 19:22:11.654975 6 log.go:172] (0xc00286bd90) Reply frame received for 5 I0312 19:22:11.719783 6 log.go:172] (0xc00286bd90) Data frame received for 3 I0312 19:22:11.719816 6 log.go:172] (0xc001f71400) (3) Data frame handling I0312 19:22:11.719836 6 log.go:172] (0xc001f71400) (3) Data frame sent I0312 19:22:11.719991 6 log.go:172] (0xc00286bd90) Data frame received for 5 I0312 19:22:11.720017 6 log.go:172] (0xc001f714a0) (5) Data frame handling I0312 19:22:11.720043 6 log.go:172] (0xc00286bd90) Data frame received for 3 I0312 19:22:11.720048 6 log.go:172] (0xc001f71400) (3) Data frame handling I0312 19:22:11.720751 6 log.go:172] (0xc00286bd90) Data frame received for 1 I0312 19:22:11.720765 6 log.go:172] (0xc0021da320) (1) Data frame handling I0312 19:22:11.720774 6 log.go:172] (0xc0021da320) (1) Data frame sent I0312 19:22:11.720789 6 log.go:172] (0xc00286bd90) (0xc0021da320) Stream removed, broadcasting: 1 I0312 19:22:11.720807 6 log.go:172] (0xc00286bd90) Go away received I0312 19:22:11.720859 6 log.go:172] (0xc00286bd90) (0xc0021da320) Stream removed, broadcasting: 1 I0312 19:22:11.720875 6 log.go:172] (0xc00286bd90) (0xc001f71400) Stream removed, broadcasting: 3 I0312 19:22:11.720884 6 log.go:172] (0xc00286bd90) (0xc001f714a0) Stream removed, broadcasting: 5 Mar 12 19:22:11.720: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:22:11.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1245" for this suite. 
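Editor's note on the rule this spec verifies: the kubelet rewrites a container's /etc/hosts unless the pod runs with hostNetwork: true, or the container itself mounts something at /etc/hosts, in which case the kubelet leaves the file alone. A minimal sketch of the two opt-out shapes; names, image, and command are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{
					// No mount at /etc/hosts: the kubelet manages this
					// container's hosts file (what busybox-1/2 see above).
					Name:    "busybox-1",
					Image:   "docker.io/library/busybox:1.29", // placeholder image
					Command: []string{"sleep", "900"},
				},
				{
					// Explicit mount over /etc/hosts: the kubelet must NOT
					// touch it (what busybox-3 verifies above).
					Name:    "busybox-3",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
			// The companion test-host-network-pod instead sets
			// HostNetwork: true, which disables kubelet management of
			// /etc/hosts for every container in the pod.
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}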
• [SLOW TEST:9.034 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:22:11.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-9d3ab636-1d03-45a1-9cad-e14b56a1e64c STEP: Creating a pod to test consume secrets Mar 12 19:22:11.816: INFO: Waiting up to 5m0s for pod "pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636" in namespace "secrets-5507" to be "success or failure" Mar 12 19:22:11.846: INFO: Pod "pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636": Phase="Pending", Reason="", readiness=false. Elapsed: 29.819668ms Mar 12 19:22:13.850: INFO: Pod "pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033390934s Mar 12 19:22:15.853: INFO: Pod "pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037370237s STEP: Saw pod success Mar 12 19:22:15.854: INFO: Pod "pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636" satisfied condition "success or failure" Mar 12 19:22:15.857: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636 container secret-volume-test: STEP: delete the pod Mar 12 19:22:15.906: INFO: Waiting for pod pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636 to disappear Mar 12 19:22:15.918: INFO: Pod pod-secrets-670925ee-2f48-4662-b969-0b058fbc9636 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:22:15.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5507" for this suite. 
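Editor's note: "with mappings" means the secret volume uses items to remap a secret key onto a custom path (and optionally a custom file mode) instead of the default key-named file. A minimal sketch; the key name, target path, mode, and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // placeholder file mode for the mapped entry
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// The mapping: key "data-1" shows up at
						// .../new-path-data-1 rather than .../data-1.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29", // placeholder image
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}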
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1416,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:22:15.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:22:18.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4641" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1419,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:22:18.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:22:18.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5896" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":93,"skipped":1421,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:22:18.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0312 19:22:24.193104 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 19:22:24.193: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:22:24.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6031" for this suite. 
• [SLOW TEST:6.079 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":94,"skipped":1421,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:22:24.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-fb03bd73-d531-49fd-a0ed-577928272409 in namespace container-probe-2503 Mar 12 19:22:26.251: INFO: Started pod busybox-fb03bd73-d531-49fd-a0ed-577928272409 in namespace container-probe-2503 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 19:22:26.253: INFO: Initial restart count of pod busybox-fb03bd73-d531-49fd-a0ed-577928272409 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:26.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2503" for this suite. 
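The probe being exercised is a plain exec liveness check: as long as the probed file exists the kubelet keeps the container alive, which is why the four-minute watch above ends with the restart count still at 0. A minimal equivalent pod (name and timings illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness            # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]  # succeeds while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
EOF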
• [SLOW TEST:242.798 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:26.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:26:27.044: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 12 19:26:29.121: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:30.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1664" for this suite. 
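The failure condition surfaced here comes from quota admission: with a hard cap of two pods, an RC asking for three replicas records a ReplicaFailure condition until it is scaled back within quota. Sketched declaratively, reusing the object names from the log (image choice illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                       # at most two pods in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                       # one more than the quota permits
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx                 # illustrative image
        image: nginx
EOF
# kubectl get rc condition-test -o jsonpath='{.status.conditions}' shows the
# ReplicaFailure condition; scaling down to replicas: 2 clears it.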
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":96,"skipped":1527,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:30.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:26:30.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def" in namespace "projected-8008" to be "success or failure" Mar 12 19:26:30.217: INFO: Pod "downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def": Phase="Pending", Reason="", readiness=false. Elapsed: 9.68023ms Mar 12 19:26:32.223: INFO: Pod "downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015502824s Mar 12 19:26:34.226: INFO: Pod "downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018655721s STEP: Saw pod success Mar 12 19:26:34.226: INFO: Pod "downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def" satisfied condition "success or failure" Mar 12 19:26:34.228: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def container client-container: STEP: delete the pod Mar 12 19:26:34.254: INFO: Waiting for pod downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def to disappear Mar 12 19:26:34.272: INFO: Pod downwardapi-volume-fa76158e-beb0-4f1b-b764-8005d5011def no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:34.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8008" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1529,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:34.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 12 19:26:34.346: INFO: Waiting up to 5m0s for pod "pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b" in namespace "emptydir-5634" to be "success or failure" Mar 12 19:26:34.355: INFO: Pod "pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.86765ms Mar 12 19:26:36.357: INFO: Pod "pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011513483s Mar 12 19:26:38.360: INFO: Pod "pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014359793s STEP: Saw pod success Mar 12 19:26:38.360: INFO: Pod "pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b" satisfied condition "success or failure" Mar 12 19:26:38.362: INFO: Trying to get logs from node jerma-worker pod pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b container test-container: STEP: delete the pod Mar 12 19:26:38.400: INFO: Waiting for pod pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b to disappear Mar 12 19:26:38.413: INFO: Pod pod-9e964812-cc0e-4f5c-b1c2-868ab0e7a16b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:38.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5634" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:38.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:26:38.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a" in namespace "projected-4044" to be "success or failure" Mar 12 19:26:38.493: INFO: Pod "downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.155372ms Mar 12 19:26:40.496: INFO: Pod "downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03338441s Mar 12 19:26:42.500: INFO: Pod "downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037196542s STEP: Saw pod success Mar 12 19:26:42.500: INFO: Pod "downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a" satisfied condition "success or failure" Mar 12 19:26:42.503: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a container client-container: STEP: delete the pod Mar 12 19:26:42.536: INFO: Waiting for pod downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a to disappear Mar 12 19:26:42.539: INFO: Pod downwardapi-volume-afb81d2e-4ba9-4856-b0f1-fbffa1ae751a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:42.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4044" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1556,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:42.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 19:26:42.583: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 19:26:42.589: INFO: Waiting for terminating namespaces to be deleted... Mar 12 19:26:42.590: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 12 19:26:42.593: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:26:42.593: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 19:26:42.593: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:26:42.593: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:26:42.593: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 12 19:26:42.603: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:26:42.603: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:26:42.603: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:26:42.603: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 12 19:26:42.659: INFO: Pod kindnet-gxwrl requesting resource cpu=100m on Node jerma-worker Mar 12 19:26:42.659: INFO: Pod kindnet-x9bds requesting resource cpu=100m on Node jerma-worker2 Mar 12 19:26:42.659: INFO: Pod kube-proxy-dvgp7 requesting resource cpu=0m on Node jerma-worker Mar 12 19:26:42.659: INFO: Pod kube-proxy-xqsww requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 12 19:26:42.659: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 12 19:26:42.662: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-8e730986-fa49-49ed-9596-062511488883.15fba514cda3f11f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9880/filler-pod-8e730986-fa49-49ed-9596-062511488883 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e730986-fa49-49ed-9596-062511488883.15fba514f7d6e825], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e730986-fa49-49ed-9596-062511488883.15fba51507616c21], Reason = [Created], Message = [Created container filler-pod-8e730986-fa49-49ed-9596-062511488883] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e730986-fa49-49ed-9596-062511488883.15fba51511812cac], Reason = [Started], Message = [Started container filler-pod-8e730986-fa49-49ed-9596-062511488883] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c4ade2-fc36-4f64-9f1f-84d9c3d70da8.15fba514cd063e94], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9880/filler-pod-a1c4ade2-fc36-4f64-9f1f-84d9c3d70da8 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c4ade2-fc36-4f64-9f1f-84d9c3d70da8.15fba514f8c8f0d2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c4ade2-fc36-4f64-9f1f-84d9c3d70da8.15fba51507bccbc4], Reason = [Created], Message = [Created container filler-pod-a1c4ade2-fc36-4f64-9f1f-84d9c3d70da8] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c4ade2-fc36-4f64-9f1f-84d9c3d70da8.15fba51513f2a810], Reason = [Started], Message = [Started container filler-pod-a1c4ade2-fc36-4f64-9f1f-84d9c3d70da8] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fba515453c226d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fba515488d1e1b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:45.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9880" for this suite. 
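The arithmetic behind those FailedScheduling events: each filler pod requests nearly all of its node's allocatable CPU (11130m here), so any further pod with a non-trivial CPU request cannot fit, and the third node is the tainted control-plane node. The rejected pod is essentially just a CPU request, along these lines (request size illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod              # name echoes the events above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "500m"                 # illustrative; anything above the remaining allocatable CPU
      limits:
        cpu: "500m"
EOF
# kubectl describe pod additional-pod reports the same
# "0/3 nodes are available ... Insufficient cpu" scheduling failure.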
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":100,"skipped":1556,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:45.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 12 19:26:48.367: INFO: Successfully updated pod "pod-update-712c999d-3117-4898-b86c-1beef3c2accd" STEP: verifying the updated pod is in kubernetes Mar 12 19:26:48.388: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:48.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7127" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1562,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:48.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:26:48.849: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:26:51.924: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:52.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2455" for this suite. STEP: Destroying namespace "webhook-2455-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":102,"skipped":1575,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:52.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 12 19:26:52.663: INFO: created pod pod-service-account-defaultsa Mar 12 19:26:52.663: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 12 19:26:52.667: INFO: created pod pod-service-account-mountsa Mar 12 19:26:52.667: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 12 19:26:52.703: INFO: created pod pod-service-account-nomountsa Mar 12 19:26:52.703: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 12 19:26:52.710: INFO: created pod pod-service-account-defaultsa-mountspec Mar 12 19:26:52.710: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 12 19:26:52.727: INFO: created pod pod-service-account-mountsa-mountspec Mar 12 19:26:52.727: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 12 19:26:52.748: INFO: created pod pod-service-account-nomountsa-mountspec Mar 12 19:26:52.748: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 12 19:26:52.757: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 12 19:26:52.757: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 12 19:26:52.797: INFO: created pod pod-service-account-mountsa-nomountspec Mar 12 19:26:52.797: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 12 19:26:52.835: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 12 19:26:52.835: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:52.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-501" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":103,"skipped":1582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:52.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9570.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9570.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9570.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9570.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9570.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9570.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:26:57.183: INFO: DNS probes using dns-9570/dns-test-4f3a6f45-a92a-4f08-b441-a6b0304e2e71 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:26:57.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9570" for this suite. 
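The getent/dig loops above are checking the A record produced by a headless service combined with a matching pod hostname and subdomain. Declaratively the setup looks roughly like this, reusing the names from the probe commands (the namespace differs per run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                   # headless: per-pod DNS records
  selector:
    name: dns-querier-2
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2     # must match the headless service name
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
# dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local then resolves
# to the pod IP.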
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":104,"skipped":1611,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:26:57.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9a219e92-704d-4041-82dc-756c8bfafb29 STEP: Creating a pod to test consume configMaps Mar 12 19:26:57.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba" in namespace "configmap-3424" to be "success or failure" Mar 12 19:26:57.470: INFO: Pod "pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba": Phase="Pending", Reason="", readiness=false. Elapsed: 46.2639ms Mar 12 19:26:59.473: INFO: Pod "pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04980662s Mar 12 19:27:01.477: INFO: Pod "pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053215636s STEP: Saw pod success Mar 12 19:27:01.477: INFO: Pod "pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba" satisfied condition "success or failure" Mar 12 19:27:01.479: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba container configmap-volume-test: STEP: delete the pod Mar 12 19:27:01.520: INFO: Waiting for pod pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba to disappear Mar 12 19:27:01.528: INFO: Pod pod-configmaps-b6b4ca6e-c8db-4656-9559-5cd1af6922ba no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:27:01.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3424" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1623,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:27:01.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:27:01.629: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 19:27:04.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8720 create -f -' Mar 12 19:27:06.645: INFO: stderr: "" Mar 12 19:27:06.645: INFO: stdout: "e2e-test-crd-publish-openapi-4290-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 19:27:06.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8720 delete e2e-test-crd-publish-openapi-4290-crds test-cr' Mar 12 19:27:06.748: INFO: stderr: "" Mar 12 19:27:06.748: INFO: stdout: "e2e-test-crd-publish-openapi-4290-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 12 19:27:06.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8720 apply -f -' Mar 12 19:27:07.014: INFO: stderr: "" Mar 12 19:27:07.014: INFO: stdout: "e2e-test-crd-publish-openapi-4290-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 19:27:07.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8720 delete e2e-test-crd-publish-openapi-4290-crds test-cr' Mar 12 19:27:07.100: INFO: stderr: "" Mar 12 19:27:07.100: INFO: stdout: "e2e-test-crd-publish-openapi-4290-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 12 19:27:07.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4290-crds' Mar 12 19:27:07.327: INFO: stderr: "" Mar 12 19:27:07.327: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4290-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:27:10.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8720" for this suite. 
• [SLOW TEST:8.580 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":106,"skipped":1623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:27:10.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-l6pw STEP: Creating a pod to test atomic-volume-subpath Mar 12 19:27:10.189: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l6pw" in namespace "subpath-6826" to be "success or failure" Mar 12 19:27:10.204: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.516244ms Mar 12 19:27:12.207: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 2.018358974s Mar 12 19:27:14.211: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 4.02218157s Mar 12 19:27:16.214: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 6.025325751s Mar 12 19:27:18.222: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 8.033036523s Mar 12 19:27:20.226: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 10.0372915s Mar 12 19:27:22.231: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 12.041624186s Mar 12 19:27:24.235: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 14.04589123s Mar 12 19:27:26.239: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 16.049889019s Mar 12 19:27:28.244: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 18.055148611s Mar 12 19:27:30.251: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 20.061994908s Mar 12 19:27:32.255: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Running", Reason="", readiness=true. Elapsed: 22.065464033s Mar 12 19:27:34.259: INFO: Pod "pod-subpath-test-configmap-l6pw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.069708635s STEP: Saw pod success Mar 12 19:27:34.259: INFO: Pod "pod-subpath-test-configmap-l6pw" satisfied condition "success or failure" Mar 12 19:27:34.262: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-l6pw container test-container-subpath-configmap-l6pw: STEP: delete the pod Mar 12 19:27:34.282: INFO: Waiting for pod pod-subpath-test-configmap-l6pw to disappear Mar 12 19:27:34.307: INFO: Pod pod-subpath-test-configmap-l6pw no longer exists STEP: Deleting pod pod-subpath-test-configmap-l6pw Mar 12 19:27:34.307: INFO: Deleting pod "pod-subpath-test-configmap-l6pw" in namespace "subpath-6826" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:27:34.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6826" for this suite. • [SLOW TEST:24.203 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":107,"skipped":1673,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:27:34.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 12 19:27:34.362: INFO: Waiting up to 5m0s for pod "client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a" in namespace "containers-6193" to be "success or failure" Mar 12 19:27:34.381: INFO: Pod "client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.573276ms Mar 12 19:27:36.386: INFO: Pod "client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023977144s Mar 12 19:27:38.388: INFO: Pod "client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026398933s STEP: Saw pod success Mar 12 19:27:38.388: INFO: Pod "client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a" satisfied condition "success or failure" Mar 12 19:27:38.389: INFO: Trying to get logs from node jerma-worker2 pod client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a container test-container: STEP: delete the pod Mar 12 19:27:38.439: INFO: Waiting for pod client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a to disappear Mar 12 19:27:38.445: INFO: Pod client-containers-3708485c-fd9d-45b9-9bf2-0df42184cf7a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:27:38.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6193" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1676,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:27:38.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:27:38.488: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-677cf55c-e1bf-408e-a8b4-be7c33bd504c" in namespace "security-context-test-2027" to be "success or failure" Mar 12 19:27:38.493: INFO: Pod "busybox-privileged-false-677cf55c-e1bf-408e-a8b4-be7c33bd504c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.617004ms Mar 12 19:27:40.496: INFO: Pod "busybox-privileged-false-677cf55c-e1bf-408e-a8b4-be7c33bd504c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007986137s Mar 12 19:27:42.500: INFO: Pod "busybox-privileged-false-677cf55c-e1bf-408e-a8b4-be7c33bd504c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011529736s Mar 12 19:27:42.500: INFO: Pod "busybox-privileged-false-677cf55c-e1bf-408e-a8b4-be7c33bd504c" satisfied condition "success or failure" Mar 12 19:27:42.506: INFO: Got logs for pod "busybox-privileged-false-677cf55c-e1bf-408e-a8b4-be7c33bd504c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:27:42.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2027" for this suite. 
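The RTNETLINK line in the captured container log is the expected symptom: without privileged mode (or CAP_NET_ADMIN) the container may not manipulate network interfaces. A minimal reproduction (name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-unprivileged        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy"]
    securityContext:
      privileged: false             # netlink operations are denied
EOF
# The container log then contains "ip: RTNETLINK answers: Operation not permitted".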
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1688,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:27:42.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-a4e6b104-0ca6-4153-b506-1d8c0921ae5a STEP: Creating a pod to test consume configMaps Mar 12 19:27:42.627: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8" in namespace "projected-8165" to be "success or failure" Mar 12 19:27:42.631: INFO: Pod "pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.878436ms Mar 12 19:27:44.634: INFO: Pod "pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007466465s Mar 12 19:27:46.638: INFO: Pod "pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011244169s STEP: Saw pod success Mar 12 19:27:46.638: INFO: Pod "pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8" satisfied condition "success or failure" Mar 12 19:27:46.641: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8 container projected-configmap-volume-test: STEP: delete the pod Mar 12 19:27:46.692: INFO: Waiting for pod pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8 to disappear Mar 12 19:27:46.698: INFO: Pod pod-projected-configmaps-54617f2a-6f9a-493b-a05a-e49a6d6c8bb8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:27:46.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8165" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1704,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:27:46.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:27:47.189: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:27:50.229: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:27:50.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5193" for this suite. STEP: Destroying namespace "webhook-5193-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":111,"skipped":1708,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:27:50.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0312 19:28:00.544611 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 19:28:00.544: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:00.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4882" for this suite. 
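The survival rule exercised above: the garbage collector removes a dependent only once every owner listed in its ownerReferences is gone, so pods given a second, still-live owner outlast the deletion of the first. The test attaches the extra owner through the API in Go; an equivalent kubectl patch (pod name and UID placeholders left unfilled) would be:

kubectl patch pod <pod-name> --type=json -p '[
  {"op": "add", "path": "/metadata/ownerReferences/-",
   "value": {"apiVersion": "v1", "kind": "ReplicationController",
             "name": "simpletest-rc-to-stay", "uid": "<uid of the RC>"}}
]'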
• [SLOW TEST:10.179 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":112,"skipped":1710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:00.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 19:28:00.620: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 19:28:00.631: INFO: Waiting for terminating namespaces to be deleted... Mar 12 19:28:00.633: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 12 19:28:00.637: INFO: simpletest-rc-to-be-deleted-ls8hf from gc-4882 started at 2020-03-12 19:27:50 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.637: INFO: Container nginx ready: true, restart count 0 Mar 12 19:28:00.637: INFO: simpletest-rc-to-be-deleted-dff4c from gc-4882 started at 2020-03-12 19:27:50 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.637: INFO: Container nginx ready: true, restart count 0 Mar 12 19:28:00.637: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.637: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 19:28:00.637: INFO: simpletest-rc-to-be-deleted-4kx2h from gc-4882 started at 2020-03-12 19:27:50 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.637: INFO: Container nginx ready: true, restart count 0 Mar 12 19:28:00.637: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.637: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:28:00.637: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 12 19:28:00.641: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.641: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 19:28:00.641: INFO: simpletest-rc-to-be-deleted-lgrp5 from gc-4882 started at 2020-03-12 19:27:50 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.641: INFO: Container nginx ready: true, restart count 0 Mar 12 19:28:00.641: INFO: simpletest-rc-to-be-deleted-m4sl8 from gc-4882 started at 2020-03-12 19:27:50 +0000 UTC (1 container statuses recorded)
Mar 12 19:28:00.641: INFO: Container nginx ready: true, restart count 0 Mar 12 19:28:00.641: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:28:00.641: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5312da4d-7f3f-43b0-8469-6ec260204da6 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-5312da4d-7f3f-43b0-8469-6ec260204da6 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-5312da4d-7f3f-43b0-8469-6ec260204da6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:04.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4786" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":113,"skipped":1789,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:04.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:28:04.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39c86aeb-a0e9-4329-b480-94ddb04c6f80" in namespace "downward-api-8915" to be "success or failure" Mar 12 19:28:04.836: INFO: Pod "downwardapi-volume-39c86aeb-a0e9-4329-b480-94ddb04c6f80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.981076ms Mar 12 19:28:06.844: INFO: Pod "downwardapi-volume-39c86aeb-a0e9-4329-b480-94ddb04c6f80": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.014465953s STEP: Saw pod success Mar 12 19:28:06.844: INFO: Pod "downwardapi-volume-39c86aeb-a0e9-4329-b480-94ddb04c6f80" satisfied condition "success or failure" Mar 12 19:28:06.846: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-39c86aeb-a0e9-4329-b480-94ddb04c6f80 container client-container: STEP: delete the pod Mar 12 19:28:06.886: INFO: Waiting for pod downwardapi-volume-39c86aeb-a0e9-4329-b480-94ddb04c6f80 to disappear Mar 12 19:28:06.891: INFO: Pod downwardapi-volume-39c86aeb-a0e9-4329-b480-94ddb04c6f80 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:06.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8915" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1808,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:06.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:10.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4973" for this suite.
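The Kubelet test above asserts that a container whose command always fails ends up with a populated terminated state in the pod status. A minimal sketch of such a pod (name and image tag are placeholders; the suite builds its pod in Go rather than from a manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: fail
    image: busybox:1.31
    command: ["/bin/false"]     # exits 1 immediately
EOF
# The terminated state should carry a reason (typically "Error") and exit code 1:
kubectl get pod always-fails \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'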
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1819,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:10.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a1fb8aec-db76-457c-8856-9de7fa9f84bd STEP: Creating a pod to test consume configMaps Mar 12 19:28:11.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3d463c7-55c6-441d-9097-45f7762ac1b5" in namespace "configmap-7094" to be "success or failure" Mar 12 19:28:11.022: INFO: Pod "pod-configmaps-c3d463c7-55c6-441d-9097-45f7762ac1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.883505ms Mar 12 19:28:13.024: INFO: Pod "pod-configmaps-c3d463c7-55c6-441d-9097-45f7762ac1b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013185451s STEP: Saw pod success Mar 12 19:28:13.024: INFO: Pod "pod-configmaps-c3d463c7-55c6-441d-9097-45f7762ac1b5" satisfied condition "success or failure" Mar 12 19:28:13.025: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c3d463c7-55c6-441d-9097-45f7762ac1b5 container configmap-volume-test: STEP: delete the pod Mar 12 19:28:13.050: INFO: Waiting for pod pod-configmaps-c3d463c7-55c6-441d-9097-45f7762ac1b5 to disappear Mar 12 19:28:13.057: INFO: Pod pod-configmaps-c3d463c7-55c6-441d-9097-45f7762ac1b5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:13.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7094" for this suite.
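"With mappings as non-root" in the ConfigMap test name decodes to two spec details, sketched below with placeholder names: an items: list that remaps the ConfigMap key to a different path inside the volume, and a pod securityContext that runs the container as a non-root UID:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-map-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # the "non-root" part
  containers:
  - name: configmap-volume-test
    image: busybox:1.31
    command: ["cat", "/etc/cm/renamed/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      items:                        # the "mapping": key appears under a new path
      - key: data-1
        path: renamed/data-1
EOF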
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1861,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:13.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 12 19:28:13.131: INFO: Waiting up to 5m0s for pod "pod-de6041ce-f4d3-4ba3-b41a-4eca4ebd14af" in namespace "emptydir-2113" to be "success or failure" Mar 12 19:28:13.135: INFO: Pod "pod-de6041ce-f4d3-4ba3-b41a-4eca4ebd14af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397659ms Mar 12 19:28:15.139: INFO: Pod "pod-de6041ce-f4d3-4ba3-b41a-4eca4ebd14af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008416084s STEP: Saw pod success Mar 12 19:28:15.139: INFO: Pod "pod-de6041ce-f4d3-4ba3-b41a-4eca4ebd14af" satisfied condition "success or failure" Mar 12 19:28:15.142: INFO: Trying to get logs from node jerma-worker pod pod-de6041ce-f4d3-4ba3-b41a-4eca4ebd14af container test-container: STEP: delete the pod Mar 12 19:28:15.162: INFO: Waiting for pod pod-de6041ce-f4d3-4ba3-b41a-4eca4ebd14af to disappear Mar 12 19:28:15.165: INFO: Pod pod-de6041ce-f4d3-4ba3-b41a-4eca4ebd14af no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:15.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2113" for this suite. 
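The "(non-root,0644,default)" tuple in the EmptyDir test name means: run as a non-root UID, expect mode 0644 on the written file, and use the default disk-backed emptyDir medium (as opposed to medium: Memory). A rough equivalent with placeholder names, not the suite's exact pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # the "non-root" part
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "echo hi > /ephemeral/f && chmod 0644 /ephemeral/f && stat -c %a /ephemeral/f"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir: {}                   # "default" medium: node disk, not tmpfs
EOF
kubectl logs pod-emptydir-0644     # should print 644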
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:15.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:18.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6776" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":118,"skipped":1933,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:18.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 19:28:18.354: INFO: Waiting up to 5m0s for pod "downward-api-dd6dd3a2-4098-4195-aee8-56302cc2b515" in namespace "downward-api-5825" to be "success or failure" Mar 12 19:28:18.359: INFO: Pod "downward-api-dd6dd3a2-4098-4195-aee8-56302cc2b515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.533007ms Mar 12 19:28:20.361: INFO: Pod "downward-api-dd6dd3a2-4098-4195-aee8-56302cc2b515": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006527955s STEP: Saw pod success Mar 12 19:28:20.361: INFO: Pod "downward-api-dd6dd3a2-4098-4195-aee8-56302cc2b515" satisfied condition "success or failure" Mar 12 19:28:20.365: INFO: Trying to get logs from node jerma-worker pod downward-api-dd6dd3a2-4098-4195-aee8-56302cc2b515 container dapi-container: STEP: delete the pod Mar 12 19:28:20.402: INFO: Waiting for pod downward-api-dd6dd3a2-4098-4195-aee8-56302cc2b515 to disappear Mar 12 19:28:20.407: INFO: Pod downward-api-dd6dd3a2-4098-4195-aee8-56302cc2b515 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:20.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5825" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1988,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:20.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0312 19:28:30.516829 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 19:28:30.516: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:30.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4887" for this suite. 
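Unlike the earlier two-owner scenario, this garbage-collector test relies on ordinary cascading deletion: deleting an RC without orphaning lets the GC remove every pod whose only owner was that RC. Sketched with a placeholder RC name (the suite calls the API directly):

# Background cascading delete is the default; with kubectl >= 1.20 it can be
# spelled out explicitly:
kubectl -n gc-4887 delete rc demo-rc --cascade=background
# The pods vanish once the GC walks the ownerReference graph:
kubectl -n gc-4887 get pods --watch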
• [SLOW TEST:10.111 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":120,"skipped":2019,"failed":0} SSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:30.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 12 19:28:30.587: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5769" to be "success or failure" Mar 12 19:28:30.592: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.679762ms Mar 12 19:28:32.596: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008886052s Mar 12 19:28:34.599: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012117198s STEP: Saw pod success Mar 12 19:28:34.599: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 12 19:28:34.601: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 12 19:28:34.641: INFO: Waiting for pod pod-host-path-test to disappear Mar 12 19:28:34.651: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:28:34.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5769" for this suite. 
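A sketch of the kind of pod the HostPath mode test runs (host path and names are placeholders): mount a directory from the node and print the mode the container sees on the mount point:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.31
    command: ["sh", "-c", "stat -c %a /test-volume"]   # print the mount's mode
    volumeMounts:
    - name: host
      mountPath: /test-volume
  volumes:
  - name: host
    hostPath:
      path: /tmp                 # placeholder host directory
      type: Directory
EOF
kubectl logs pod-host-path-demo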
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2026,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:28:34.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-f25cbe5e-23c1-4b6c-a0c9-39f2d2f9d2eb in namespace container-probe-6175 Mar 12 19:28:36.758: INFO: Started pod test-webserver-f25cbe5e-23c1-4b6c-a0c9-39f2d2f9d2eb in namespace container-probe-6175 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 19:28:36.761: INFO: Initial restart count of pod test-webserver-f25cbe5e-23c1-4b6c-a0c9-39f2d2f9d2eb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:32:37.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6175" for this suite. 
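Most of the four minutes in the probe test above is deliberate observation: the suite starts a webserver with an HTTP liveness probe and confirms restartCount never leaves 0. A minimal sketch (image and probe numbers are placeholders, not the suite's exact values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: healthz-demo
spec:
  containers:
  - name: web
    image: nginx:1.17          # placeholder; the suite uses its own test-webserver image and probes /healthz
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
EOF
# A healthy probe means this stays at 0 for the whole observation window:
kubectl get pod healthz-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'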
• [SLOW TEST:242.671 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2037,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:32:37.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 12 19:32:37.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4055' Mar 12 19:32:37.685: INFO: stderr: "" Mar 12 19:32:37.685: INFO: stdout: "pod/pause created\n" Mar 12 19:32:37.686: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 12 19:32:37.686: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4055" to be "running and ready" Mar 12 19:32:37.689: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.706587ms Mar 12 19:32:39.693: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.007006334s Mar 12 19:32:39.693: INFO: Pod "pause" satisfied condition "running and ready" Mar 12 19:32:39.693: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 12 19:32:39.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4055' Mar 12 19:32:39.813: INFO: stderr: "" Mar 12 19:32:39.813: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 12 19:32:39.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4055' Mar 12 19:32:39.910: INFO: stderr: "" Mar 12 19:32:39.910: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 12 19:32:39.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4055' Mar 12 19:32:39.992: INFO: stderr: "" Mar 12 19:32:39.992: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 12 19:32:39.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4055' Mar 12 19:32:40.060: INFO: stderr: "" Mar 12 19:32:40.061: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 12 19:32:40.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4055' Mar 12 19:32:40.166: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:32:40.166: INFO: stdout: "pod \"pause\" force deleted\n" Mar 12 19:32:40.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4055' Mar 12 19:32:40.255: INFO: stderr: "No resources found in kubectl-4055 namespace.\n" Mar 12 19:32:40.255: INFO: stdout: "" Mar 12 19:32:40.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4055 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 19:32:40.317: INFO: stderr: "" Mar 12 19:32:40.318: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:32:40.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4055" for this suite. 
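Stripped of the log prefixes, the label round trip above reduces to three kubectl idioms: key=value sets a label, -L surfaces it as an output column, and a trailing hyphen removes it:

kubectl -n kubectl-4055 label pods pause testing-label=testing-label-value
kubectl -n kubectl-4055 get pod pause -L testing-label   # value shows in the TESTING-LABEL column
kubectl -n kubectl-4055 label pods pause testing-label-  # trailing '-' deletes the label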
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":123,"skipped":2053,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:32:40.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-97b549ec-ed98-4d1b-8da2-80affb63b5b4 STEP: Creating a pod to test consume secrets Mar 12 19:32:40.416: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44862ee3-9405-4700-af26-b0203da4eff3" in namespace "projected-4103" to be "success or failure" Mar 12 19:32:40.426: INFO: Pod "pod-projected-secrets-44862ee3-9405-4700-af26-b0203da4eff3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.90762ms Mar 12 19:32:42.428: INFO: Pod "pod-projected-secrets-44862ee3-9405-4700-af26-b0203da4eff3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012430756s STEP: Saw pod success Mar 12 19:32:42.428: INFO: Pod "pod-projected-secrets-44862ee3-9405-4700-af26-b0203da4eff3" satisfied condition "success or failure" Mar 12 19:32:42.430: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-44862ee3-9405-4700-af26-b0203da4eff3 container projected-secret-volume-test: STEP: delete the pod Mar 12 19:32:42.478: INFO: Waiting for pod pod-projected-secrets-44862ee3-9405-4700-af26-b0203da4eff3 to disappear Mar 12 19:32:42.485: INFO: Pod pod-projected-secrets-44862ee3-9405-4700-af26-b0203da4eff3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:32:42.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4103" for this suite. 
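A projected volume gathers one or more sources (secret, configMap, downwardAPI, serviceAccountToken) under a single mount point. A hedged sketch of the secret case exercised above, with placeholder names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31
    command: ["cat", "/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /projected
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: demo-secret
EOF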
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2063,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:32:42.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:32:42.553: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8136 I0312 19:32:42.564027 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8136, replica count: 1 I0312 19:32:43.614378 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 19:32:44.614529 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 19:32:44.754: INFO: Created: latency-svc-m9zq7 Mar 12 19:32:44.791: INFO: Got endpoints: latency-svc-m9zq7 [76.740346ms] Mar 12 19:32:44.825: INFO: Created: latency-svc-2dv7r Mar 12 19:32:44.834: INFO: Got endpoints: latency-svc-2dv7r [43.001991ms] Mar 12 19:32:44.855: INFO: Created: latency-svc-h2859 Mar 12 19:32:44.858: INFO: Got endpoints: latency-svc-h2859 [66.64966ms] Mar 12 19:32:44.938: INFO: Created: latency-svc-n8vvs Mar 12 19:32:44.956: INFO: Got endpoints: latency-svc-n8vvs [164.951708ms] Mar 12 19:32:44.987: INFO: Created: latency-svc-jmhgt Mar 12 19:32:44.994: INFO: Got endpoints: latency-svc-jmhgt [203.401604ms] Mar 12 19:32:45.017: INFO: Created: latency-svc-kx2td Mar 12 19:32:45.082: INFO: Got endpoints: latency-svc-kx2td [290.441085ms] Mar 12 19:32:45.082: INFO: Created: latency-svc-h6w4j Mar 12 19:32:45.084: INFO: Got endpoints: latency-svc-h6w4j [293.026232ms] Mar 12 19:32:45.114: INFO: Created: latency-svc-ldpxk Mar 12 19:32:45.117: INFO: Got endpoints: latency-svc-ldpxk [325.811802ms] Mar 12 19:32:45.150: INFO: Created: latency-svc-766kx Mar 12 19:32:45.153: INFO: Got endpoints: latency-svc-766kx [361.628703ms] Mar 12 19:32:45.179: INFO: Created: latency-svc-zr7jd Mar 12 19:32:45.225: INFO: Got endpoints: latency-svc-zr7jd [433.590304ms] Mar 12 19:32:45.227: INFO: Created: latency-svc-qgg42 Mar 12 19:32:45.232: INFO: Got endpoints: latency-svc-qgg42 [440.387645ms] Mar 12 19:32:45.258: INFO: Created: latency-svc-9r98m Mar 12 19:32:45.260: INFO: Got endpoints: latency-svc-9r98m [468.756985ms] Mar 12 19:32:45.300: INFO: Created: latency-svc-dfv5h Mar 12 19:32:45.308: INFO: Got endpoints: latency-svc-dfv5h [516.767062ms] Mar 12 19:32:45.381: INFO: Created: latency-svc-sp6d6 Mar 12 19:32:45.385: INFO: Got endpoints: latency-svc-sp6d6 [593.236478ms] Mar 12 19:32:45.408: INFO: Created: latency-svc-rc9jd Mar 12 19:32:45.411: INFO: Got endpoints: latency-svc-rc9jd [619.819621ms] Mar 12 19:32:45.437: INFO: Created: latency-svc-hw4mf Mar 12 
19:32:45.442: INFO: Got endpoints: latency-svc-hw4mf [650.818767ms] Mar 12 19:32:45.461: INFO: Created: latency-svc-7v7fh Mar 12 19:32:45.468: INFO: Got endpoints: latency-svc-7v7fh [633.710279ms] Mar 12 19:32:45.519: INFO: Created: latency-svc-h9vbs Mar 12 19:32:45.540: INFO: Got endpoints: latency-svc-h9vbs [682.066169ms] Mar 12 19:32:45.557: INFO: Created: latency-svc-kg4lv Mar 12 19:32:45.564: INFO: Got endpoints: latency-svc-kg4lv [607.757144ms] Mar 12 19:32:45.581: INFO: Created: latency-svc-vxngl Mar 12 19:32:45.589: INFO: Got endpoints: latency-svc-vxngl [594.130063ms] Mar 12 19:32:45.606: INFO: Created: latency-svc-w6cwm Mar 12 19:32:45.613: INFO: Got endpoints: latency-svc-w6cwm [530.684844ms] Mar 12 19:32:45.668: INFO: Created: latency-svc-v2mxg Mar 12 19:32:45.672: INFO: Got endpoints: latency-svc-v2mxg [587.992408ms] Mar 12 19:32:45.703: INFO: Created: latency-svc-2cdkv Mar 12 19:32:45.726: INFO: Created: latency-svc-5m8jf Mar 12 19:32:45.727: INFO: Got endpoints: latency-svc-2cdkv [609.686857ms] Mar 12 19:32:45.731: INFO: Got endpoints: latency-svc-5m8jf [577.985012ms] Mar 12 19:32:45.749: INFO: Created: latency-svc-x9btk Mar 12 19:32:45.755: INFO: Got endpoints: latency-svc-x9btk [530.279886ms] Mar 12 19:32:45.821: INFO: Created: latency-svc-7x7jx Mar 12 19:32:45.824: INFO: Got endpoints: latency-svc-7x7jx [591.998731ms] Mar 12 19:32:45.846: INFO: Created: latency-svc-6gcgs Mar 12 19:32:45.853: INFO: Got endpoints: latency-svc-6gcgs [592.624584ms] Mar 12 19:32:45.871: INFO: Created: latency-svc-pfkpq Mar 12 19:32:45.877: INFO: Got endpoints: latency-svc-pfkpq [568.702368ms] Mar 12 19:32:45.899: INFO: Created: latency-svc-rj85g Mar 12 19:32:45.973: INFO: Got endpoints: latency-svc-rj85g [588.55084ms] Mar 12 19:32:45.975: INFO: Created: latency-svc-dgq67 Mar 12 19:32:45.979: INFO: Got endpoints: latency-svc-dgq67 [568.28521ms] Mar 12 19:32:46.014: INFO: Created: latency-svc-j26wx Mar 12 19:32:46.021: INFO: Got endpoints: latency-svc-j26wx [579.277486ms] Mar 12 19:32:46.046: INFO: Created: latency-svc-kf7vm Mar 12 19:32:46.067: INFO: Got endpoints: latency-svc-kf7vm [599.254552ms] Mar 12 19:32:46.117: INFO: Created: latency-svc-4nfx5 Mar 12 19:32:46.120: INFO: Got endpoints: latency-svc-4nfx5 [579.413504ms] Mar 12 19:32:46.146: INFO: Created: latency-svc-qvpsv Mar 12 19:32:46.152: INFO: Got endpoints: latency-svc-qvpsv [587.731903ms] Mar 12 19:32:46.170: INFO: Created: latency-svc-595r6 Mar 12 19:32:46.182: INFO: Got endpoints: latency-svc-595r6 [593.434907ms] Mar 12 19:32:46.273: INFO: Created: latency-svc-wrqk4 Mar 12 19:32:46.283: INFO: Got endpoints: latency-svc-wrqk4 [670.191141ms] Mar 12 19:32:46.302: INFO: Created: latency-svc-l8xqp Mar 12 19:32:46.307: INFO: Got endpoints: latency-svc-l8xqp [634.763301ms] Mar 12 19:32:46.346: INFO: Created: latency-svc-knjnc Mar 12 19:32:46.355: INFO: Got endpoints: latency-svc-knjnc [627.959626ms] Mar 12 19:32:46.421: INFO: Created: latency-svc-x8hsp Mar 12 19:32:46.427: INFO: Got endpoints: latency-svc-x8hsp [695.990324ms] Mar 12 19:32:46.452: INFO: Created: latency-svc-wvjkr Mar 12 19:32:46.460: INFO: Got endpoints: latency-svc-wvjkr [704.47087ms] Mar 12 19:32:46.482: INFO: Created: latency-svc-8tz8n Mar 12 19:32:46.489: INFO: Got endpoints: latency-svc-8tz8n [664.958104ms] Mar 12 19:32:46.585: INFO: Created: latency-svc-t6dj6 Mar 12 19:32:46.608: INFO: Got endpoints: latency-svc-t6dj6 [755.22644ms] Mar 12 19:32:46.609: INFO: Created: latency-svc-n92hj Mar 12 19:32:46.615: INFO: Got endpoints: latency-svc-n92hj [737.699379ms] Mar 12 
19:32:46.637: INFO: Created: latency-svc-zhxqr Mar 12 19:32:46.645: INFO: Got endpoints: latency-svc-zhxqr [671.574059ms] Mar 12 19:32:46.662: INFO: Created: latency-svc-878fq Mar 12 19:32:46.681: INFO: Got endpoints: latency-svc-878fq [701.093964ms] Mar 12 19:32:46.723: INFO: Created: latency-svc-9mzzb Mar 12 19:32:46.725: INFO: Got endpoints: latency-svc-9mzzb [703.125663ms] Mar 12 19:32:46.757: INFO: Created: latency-svc-4vbkj Mar 12 19:32:46.764: INFO: Got endpoints: latency-svc-4vbkj [696.798823ms] Mar 12 19:32:46.787: INFO: Created: latency-svc-dsdvh Mar 12 19:32:46.800: INFO: Got endpoints: latency-svc-dsdvh [680.121277ms] Mar 12 19:32:46.860: INFO: Created: latency-svc-lzpgc Mar 12 19:32:46.862: INFO: Got endpoints: latency-svc-lzpgc [710.44207ms] Mar 12 19:32:46.885: INFO: Created: latency-svc-5w97q Mar 12 19:32:46.891: INFO: Got endpoints: latency-svc-5w97q [708.766776ms] Mar 12 19:32:46.913: INFO: Created: latency-svc-82mr5 Mar 12 19:32:46.932: INFO: Got endpoints: latency-svc-82mr5 [648.595343ms] Mar 12 19:32:46.955: INFO: Created: latency-svc-xwl8x Mar 12 19:32:46.991: INFO: Got endpoints: latency-svc-xwl8x [684.076219ms] Mar 12 19:32:47.028: INFO: Created: latency-svc-bdxd7 Mar 12 19:32:47.036: INFO: Got endpoints: latency-svc-bdxd7 [680.555065ms] Mar 12 19:32:47.064: INFO: Created: latency-svc-c89hn Mar 12 19:32:47.083: INFO: Got endpoints: latency-svc-c89hn [655.491506ms] Mar 12 19:32:47.141: INFO: Created: latency-svc-mjdrf Mar 12 19:32:47.177: INFO: Created: latency-svc-gpgm8 Mar 12 19:32:47.177: INFO: Got endpoints: latency-svc-mjdrf [717.489555ms] Mar 12 19:32:47.181: INFO: Got endpoints: latency-svc-gpgm8 [691.694927ms] Mar 12 19:32:47.217: INFO: Created: latency-svc-trnwg Mar 12 19:32:47.232: INFO: Got endpoints: latency-svc-trnwg [624.287373ms] Mar 12 19:32:47.310: INFO: Created: latency-svc-zjsb6 Mar 12 19:32:47.319: INFO: Got endpoints: latency-svc-zjsb6 [704.362123ms] Mar 12 19:32:47.340: INFO: Created: latency-svc-l7slj Mar 12 19:32:47.352: INFO: Got endpoints: latency-svc-l7slj [706.602172ms] Mar 12 19:32:47.371: INFO: Created: latency-svc-pgwzd Mar 12 19:32:47.380: INFO: Got endpoints: latency-svc-pgwzd [699.446791ms] Mar 12 19:32:47.401: INFO: Created: latency-svc-lbl8p Mar 12 19:32:47.453: INFO: Got endpoints: latency-svc-lbl8p [727.984254ms] Mar 12 19:32:47.471: INFO: Created: latency-svc-rqkdx Mar 12 19:32:47.490: INFO: Got endpoints: latency-svc-rqkdx [726.192728ms] Mar 12 19:32:47.514: INFO: Created: latency-svc-lvngw Mar 12 19:32:47.521: INFO: Got endpoints: latency-svc-lvngw [720.976609ms] Mar 12 19:32:47.590: INFO: Created: latency-svc-bhmlr Mar 12 19:32:47.634: INFO: Got endpoints: latency-svc-bhmlr [772.027801ms] Mar 12 19:32:47.664: INFO: Created: latency-svc-lw64s Mar 12 19:32:47.669: INFO: Got endpoints: latency-svc-lw64s [777.733596ms] Mar 12 19:32:47.728: INFO: Created: latency-svc-jrxx4 Mar 12 19:32:47.735: INFO: Got endpoints: latency-svc-jrxx4 [803.242278ms] Mar 12 19:32:47.756: INFO: Created: latency-svc-bvzqk Mar 12 19:32:47.759: INFO: Got endpoints: latency-svc-bvzqk [767.828286ms] Mar 12 19:32:47.967: INFO: Created: latency-svc-8tdhm Mar 12 19:32:47.971: INFO: Got endpoints: latency-svc-8tdhm [935.498408ms] Mar 12 19:32:48.013: INFO: Created: latency-svc-vjdcg Mar 12 19:32:48.018: INFO: Got endpoints: latency-svc-vjdcg [934.782985ms] Mar 12 19:32:48.062: INFO: Created: latency-svc-2rh2m Mar 12 19:32:48.066: INFO: Got endpoints: latency-svc-2rh2m [888.760044ms] Mar 12 19:32:48.103: INFO: Created: latency-svc-qx4qt Mar 12 19:32:48.114: 
INFO: Got endpoints: latency-svc-qx4qt [933.430861ms] Mar 12 19:32:48.156: INFO: Created: latency-svc-2pmmn Mar 12 19:32:48.165: INFO: Got endpoints: latency-svc-2pmmn [932.682381ms] Mar 12 19:32:48.182: INFO: Created: latency-svc-kw772 Mar 12 19:32:48.189: INFO: Got endpoints: latency-svc-kw772 [869.839895ms] Mar 12 19:32:48.227: INFO: Created: latency-svc-7nt92 Mar 12 19:32:48.232: INFO: Got endpoints: latency-svc-7nt92 [879.913083ms] Mar 12 19:32:48.271: INFO: Created: latency-svc-jh5vq Mar 12 19:32:48.288: INFO: Got endpoints: latency-svc-jh5vq [908.035987ms] Mar 12 19:32:48.307: INFO: Created: latency-svc-gdgmt Mar 12 19:32:48.315: INFO: Got endpoints: latency-svc-gdgmt [862.617728ms] Mar 12 19:32:48.357: INFO: Created: latency-svc-cxf7j Mar 12 19:32:48.364: INFO: Got endpoints: latency-svc-cxf7j [873.205148ms] Mar 12 19:32:48.408: INFO: Created: latency-svc-dm8q7 Mar 12 19:32:48.412: INFO: Got endpoints: latency-svc-dm8q7 [891.1241ms] Mar 12 19:32:48.432: INFO: Created: latency-svc-drncm Mar 12 19:32:48.436: INFO: Got endpoints: latency-svc-drncm [801.854888ms] Mar 12 19:32:48.500: INFO: Created: latency-svc-2f7ph Mar 12 19:32:48.528: INFO: Got endpoints: latency-svc-2f7ph [859.353367ms] Mar 12 19:32:48.528: INFO: Created: latency-svc-n6hbb Mar 12 19:32:48.533: INFO: Got endpoints: latency-svc-n6hbb [797.845452ms] Mar 12 19:32:48.558: INFO: Created: latency-svc-cfdss Mar 12 19:32:48.564: INFO: Got endpoints: latency-svc-cfdss [804.283651ms] Mar 12 19:32:48.651: INFO: Created: latency-svc-gsxnd Mar 12 19:32:48.654: INFO: Got endpoints: latency-svc-gsxnd [682.320389ms] Mar 12 19:32:48.720: INFO: Created: latency-svc-dpzx5 Mar 12 19:32:48.744: INFO: Got endpoints: latency-svc-dpzx5 [725.94279ms] Mar 12 19:32:48.793: INFO: Created: latency-svc-mdb66 Mar 12 19:32:48.840: INFO: Got endpoints: latency-svc-mdb66 [773.695215ms] Mar 12 19:32:48.870: INFO: Created: latency-svc-hnwz4 Mar 12 19:32:48.876: INFO: Got endpoints: latency-svc-hnwz4 [761.962416ms] Mar 12 19:32:48.937: INFO: Created: latency-svc-z4745 Mar 12 19:32:48.956: INFO: Got endpoints: latency-svc-z4745 [790.649988ms] Mar 12 19:32:48.984: INFO: Created: latency-svc-j4kt5 Mar 12 19:32:49.003: INFO: Got endpoints: latency-svc-j4kt5 [814.381237ms] Mar 12 19:32:49.027: INFO: Created: latency-svc-rp4l5 Mar 12 19:32:49.063: INFO: Got endpoints: latency-svc-rp4l5 [831.288845ms] Mar 12 19:32:49.075: INFO: Created: latency-svc-2mhmn Mar 12 19:32:49.081: INFO: Got endpoints: latency-svc-2mhmn [792.617121ms] Mar 12 19:32:49.098: INFO: Created: latency-svc-jr57t Mar 12 19:32:49.116: INFO: Got endpoints: latency-svc-jr57t [800.917607ms] Mar 12 19:32:49.140: INFO: Created: latency-svc-p9wxq Mar 12 19:32:49.147: INFO: Got endpoints: latency-svc-p9wxq [783.621829ms] Mar 12 19:32:49.217: INFO: Created: latency-svc-78x24 Mar 12 19:32:49.226: INFO: Got endpoints: latency-svc-78x24 [813.339696ms] Mar 12 19:32:49.244: INFO: Created: latency-svc-wrvmm Mar 12 19:32:49.266: INFO: Got endpoints: latency-svc-wrvmm [829.802841ms] Mar 12 19:32:49.266: INFO: Created: latency-svc-p6vlh Mar 12 19:32:49.274: INFO: Got endpoints: latency-svc-p6vlh [745.925456ms] Mar 12 19:32:49.387: INFO: Created: latency-svc-bzw8v Mar 12 19:32:49.389: INFO: Got endpoints: latency-svc-bzw8v [855.906457ms] Mar 12 19:32:49.424: INFO: Created: latency-svc-szmgm Mar 12 19:32:49.447: INFO: Created: latency-svc-jjpg2 Mar 12 19:32:49.447: INFO: Got endpoints: latency-svc-szmgm [883.418755ms] Mar 12 19:32:49.455: INFO: Got endpoints: latency-svc-jjpg2 [801.467621ms] Mar 12 19:32:49.518: 
INFO: Created: latency-svc-l687b Mar 12 19:32:49.567: INFO: Created: latency-svc-v676p Mar 12 19:32:49.568: INFO: Got endpoints: latency-svc-l687b [823.904895ms] Mar 12 19:32:49.582: INFO: Got endpoints: latency-svc-v676p [741.982611ms] Mar 12 19:32:49.662: INFO: Created: latency-svc-sh6gx Mar 12 19:32:49.664: INFO: Got endpoints: latency-svc-sh6gx [787.465648ms] Mar 12 19:32:49.705: INFO: Created: latency-svc-hlwpm Mar 12 19:32:49.716: INFO: Got endpoints: latency-svc-hlwpm [760.848428ms] Mar 12 19:32:49.741: INFO: Created: latency-svc-gvczh Mar 12 19:32:49.754: INFO: Got endpoints: latency-svc-gvczh [750.891379ms] Mar 12 19:32:49.806: INFO: Created: latency-svc-9q84c Mar 12 19:32:49.810: INFO: Got endpoints: latency-svc-9q84c [747.508113ms] Mar 12 19:32:49.839: INFO: Created: latency-svc-8mnqs Mar 12 19:32:49.842: INFO: Got endpoints: latency-svc-8mnqs [761.382748ms] Mar 12 19:32:49.867: INFO: Created: latency-svc-427hh Mar 12 19:32:49.871: INFO: Got endpoints: latency-svc-427hh [754.777535ms] Mar 12 19:32:49.891: INFO: Created: latency-svc-2h6rf Mar 12 19:32:49.896: INFO: Got endpoints: latency-svc-2h6rf [748.488465ms] Mar 12 19:32:49.938: INFO: Created: latency-svc-w9bfv Mar 12 19:32:49.944: INFO: Got endpoints: latency-svc-w9bfv [718.24268ms] Mar 12 19:32:49.977: INFO: Created: latency-svc-zbrdr Mar 12 19:32:49.985: INFO: Got endpoints: latency-svc-zbrdr [718.640501ms] Mar 12 19:32:50.007: INFO: Created: latency-svc-hv2s7 Mar 12 19:32:50.009: INFO: Got endpoints: latency-svc-hv2s7 [734.666413ms] Mar 12 19:32:50.029: INFO: Created: latency-svc-dblvz Mar 12 19:32:50.034: INFO: Got endpoints: latency-svc-dblvz [644.864287ms] Mar 12 19:32:50.081: INFO: Created: latency-svc-9w4rl Mar 12 19:32:50.083: INFO: Got endpoints: latency-svc-9w4rl [636.335662ms] Mar 12 19:32:50.109: INFO: Created: latency-svc-j46jh Mar 12 19:32:50.112: INFO: Got endpoints: latency-svc-j46jh [657.085591ms] Mar 12 19:32:50.151: INFO: Created: latency-svc-qr4g2 Mar 12 19:32:50.155: INFO: Got endpoints: latency-svc-qr4g2 [587.362402ms] Mar 12 19:32:50.173: INFO: Created: latency-svc-p7z5l Mar 12 19:32:50.237: INFO: Got endpoints: latency-svc-p7z5l [654.858259ms] Mar 12 19:32:50.276: INFO: Created: latency-svc-k9zz7 Mar 12 19:32:50.281: INFO: Got endpoints: latency-svc-k9zz7 [617.061248ms] Mar 12 19:32:50.304: INFO: Created: latency-svc-spjv6 Mar 12 19:32:50.318: INFO: Got endpoints: latency-svc-spjv6 [601.224463ms] Mar 12 19:32:50.369: INFO: Created: latency-svc-vsd6s Mar 12 19:32:50.383: INFO: Got endpoints: latency-svc-vsd6s [629.095442ms] Mar 12 19:32:50.413: INFO: Created: latency-svc-tr846 Mar 12 19:32:50.419: INFO: Got endpoints: latency-svc-tr846 [608.943941ms] Mar 12 19:32:50.439: INFO: Created: latency-svc-2wvgg Mar 12 19:32:50.456: INFO: Got endpoints: latency-svc-2wvgg [613.753986ms] Mar 12 19:32:50.506: INFO: Created: latency-svc-j99dt Mar 12 19:32:50.510: INFO: Got endpoints: latency-svc-j99dt [638.914729ms] Mar 12 19:32:50.529: INFO: Created: latency-svc-gkql8 Mar 12 19:32:50.547: INFO: Got endpoints: latency-svc-gkql8 [650.997643ms] Mar 12 19:32:50.576: INFO: Created: latency-svc-9zc8w Mar 12 19:32:50.583: INFO: Got endpoints: latency-svc-9zc8w [638.759827ms] Mar 12 19:32:50.605: INFO: Created: latency-svc-cmfvx Mar 12 19:32:50.740: INFO: Got endpoints: latency-svc-cmfvx [754.770212ms] Mar 12 19:32:50.750: INFO: Created: latency-svc-lhpnp Mar 12 19:32:50.758: INFO: Got endpoints: latency-svc-lhpnp [748.665882ms] Mar 12 19:32:50.782: INFO: Created: latency-svc-z7bbh Mar 12 19:32:50.788: INFO: Got 
endpoints: latency-svc-z7bbh [754.755807ms] Mar 12 19:32:50.817: INFO: Created: latency-svc-vfh4h Mar 12 19:32:50.889: INFO: Got endpoints: latency-svc-vfh4h [805.936437ms] Mar 12 19:32:50.891: INFO: Created: latency-svc-hkvkv Mar 12 19:32:50.902: INFO: Got endpoints: latency-svc-hkvkv [790.124854ms] Mar 12 19:32:50.931: INFO: Created: latency-svc-7jw69 Mar 12 19:32:50.939: INFO: Got endpoints: latency-svc-7jw69 [783.616021ms] Mar 12 19:32:50.960: INFO: Created: latency-svc-dtsg6 Mar 12 19:32:50.969: INFO: Got endpoints: latency-svc-dtsg6 [732.074387ms] Mar 12 19:32:51.028: INFO: Created: latency-svc-hp9kv Mar 12 19:32:51.043: INFO: Got endpoints: latency-svc-hp9kv [762.545024ms] Mar 12 19:32:51.062: INFO: Created: latency-svc-c594t Mar 12 19:32:51.065: INFO: Got endpoints: latency-svc-c594t [747.289075ms] Mar 12 19:32:51.087: INFO: Created: latency-svc-l5dkm Mar 12 19:32:51.090: INFO: Got endpoints: latency-svc-l5dkm [706.373488ms] Mar 12 19:32:51.110: INFO: Created: latency-svc-p6q6f Mar 12 19:32:51.114: INFO: Got endpoints: latency-svc-p6q6f [694.563301ms] Mar 12 19:32:51.159: INFO: Created: latency-svc-sbwq7 Mar 12 19:32:51.175: INFO: Got endpoints: latency-svc-sbwq7 [718.720985ms] Mar 12 19:32:51.200: INFO: Created: latency-svc-bmwcb Mar 12 19:32:51.205: INFO: Got endpoints: latency-svc-bmwcb [694.535646ms] Mar 12 19:32:51.224: INFO: Created: latency-svc-r9c7f Mar 12 19:32:51.229: INFO: Got endpoints: latency-svc-r9c7f [681.921119ms] Mar 12 19:32:51.248: INFO: Created: latency-svc-nhsg6 Mar 12 19:32:51.297: INFO: Got endpoints: latency-svc-nhsg6 [714.402917ms] Mar 12 19:32:51.302: INFO: Created: latency-svc-ln49k Mar 12 19:32:51.319: INFO: Got endpoints: latency-svc-ln49k [579.704535ms] Mar 12 19:32:51.337: INFO: Created: latency-svc-ns7gb Mar 12 19:32:51.356: INFO: Created: latency-svc-gnhbk Mar 12 19:32:51.356: INFO: Got endpoints: latency-svc-ns7gb [598.246308ms] Mar 12 19:32:51.358: INFO: Got endpoints: latency-svc-gnhbk [569.712372ms] Mar 12 19:32:51.379: INFO: Created: latency-svc-x474t Mar 12 19:32:51.382: INFO: Got endpoints: latency-svc-x474t [492.209884ms] Mar 12 19:32:51.428: INFO: Created: latency-svc-rqh9j Mar 12 19:32:51.440: INFO: Got endpoints: latency-svc-rqh9j [537.926177ms] Mar 12 19:32:51.459: INFO: Created: latency-svc-m75p5 Mar 12 19:32:51.466: INFO: Got endpoints: latency-svc-m75p5 [527.207667ms] Mar 12 19:32:51.482: INFO: Created: latency-svc-269cr Mar 12 19:32:51.489: INFO: Got endpoints: latency-svc-269cr [520.393457ms] Mar 12 19:32:51.511: INFO: Created: latency-svc-zfksr Mar 12 19:32:51.514: INFO: Got endpoints: latency-svc-zfksr [470.579966ms] Mar 12 19:32:51.578: INFO: Created: latency-svc-mbx9h Mar 12 19:32:51.580: INFO: Got endpoints: latency-svc-mbx9h [514.815465ms] Mar 12 19:32:51.639: INFO: Created: latency-svc-mhgqf Mar 12 19:32:51.647: INFO: Got endpoints: latency-svc-mhgqf [556.680757ms] Mar 12 19:32:51.668: INFO: Created: latency-svc-xkpmx Mar 12 19:32:51.677: INFO: Got endpoints: latency-svc-xkpmx [562.79694ms] Mar 12 19:32:51.723: INFO: Created: latency-svc-ll2b2 Mar 12 19:32:51.731: INFO: Got endpoints: latency-svc-ll2b2 [556.52653ms] Mar 12 19:32:51.752: INFO: Created: latency-svc-dksp4 Mar 12 19:32:51.761: INFO: Got endpoints: latency-svc-dksp4 [556.827888ms] Mar 12 19:32:51.789: INFO: Created: latency-svc-mxscf Mar 12 19:32:51.811: INFO: Got endpoints: latency-svc-mxscf [582.54159ms] Mar 12 19:32:51.861: INFO: Created: latency-svc-gntgl Mar 12 19:32:51.878: INFO: Got endpoints: latency-svc-gntgl [581.070508ms] Mar 12 19:32:51.879: INFO: 
Created: latency-svc-z5brj Mar 12 19:32:51.882: INFO: Got endpoints: latency-svc-z5brj [562.339354ms] Mar 12 19:32:51.927: INFO: Created: latency-svc-8p8zw Mar 12 19:32:51.943: INFO: Got endpoints: latency-svc-8p8zw [587.356271ms] Mar 12 19:32:52.015: INFO: Created: latency-svc-f64cz Mar 12 19:32:52.027: INFO: Got endpoints: latency-svc-f64cz [668.469268ms] Mar 12 19:32:52.052: INFO: Created: latency-svc-l4r8l Mar 12 19:32:52.057: INFO: Got endpoints: latency-svc-l4r8l [675.346439ms] Mar 12 19:32:52.083: INFO: Created: latency-svc-fwqtm Mar 12 19:32:52.087: INFO: Got endpoints: latency-svc-fwqtm [647.177581ms] Mar 12 19:32:52.107: INFO: Created: latency-svc-rlnwh Mar 12 19:32:52.159: INFO: Got endpoints: latency-svc-rlnwh [693.492442ms] Mar 12 19:32:52.161: INFO: Created: latency-svc-p7x7m Mar 12 19:32:52.201: INFO: Got endpoints: latency-svc-p7x7m [712.048984ms] Mar 12 19:32:52.220: INFO: Created: latency-svc-vx4gv Mar 12 19:32:52.221: INFO: Got endpoints: latency-svc-vx4gv [707.630078ms] Mar 12 19:32:52.246: INFO: Created: latency-svc-kvtsf Mar 12 19:32:52.291: INFO: Got endpoints: latency-svc-kvtsf [710.691207ms] Mar 12 19:32:52.317: INFO: Created: latency-svc-2hzkl Mar 12 19:32:52.339: INFO: Got endpoints: latency-svc-2hzkl [692.882842ms] Mar 12 19:32:52.364: INFO: Created: latency-svc-5pdmw Mar 12 19:32:52.373: INFO: Got endpoints: latency-svc-5pdmw [695.719935ms] Mar 12 19:32:52.429: INFO: Created: latency-svc-5phzx Mar 12 19:32:52.446: INFO: Got endpoints: latency-svc-5phzx [714.608717ms] Mar 12 19:32:52.473: INFO: Created: latency-svc-g99jr Mar 12 19:32:52.481: INFO: Got endpoints: latency-svc-g99jr [719.696668ms] Mar 12 19:32:52.499: INFO: Created: latency-svc-2jrqx Mar 12 19:32:52.500: INFO: Got endpoints: latency-svc-2jrqx [689.118709ms] Mar 12 19:32:52.580: INFO: Created: latency-svc-5fp27 Mar 12 19:32:52.583: INFO: Got endpoints: latency-svc-5fp27 [705.021193ms] Mar 12 19:32:52.611: INFO: Created: latency-svc-4qml2 Mar 12 19:32:52.620: INFO: Got endpoints: latency-svc-4qml2 [738.051207ms] Mar 12 19:32:52.641: INFO: Created: latency-svc-7h2t4 Mar 12 19:32:52.644: INFO: Got endpoints: latency-svc-7h2t4 [700.42234ms] Mar 12 19:32:52.665: INFO: Created: latency-svc-9pknv Mar 12 19:32:52.692: INFO: Got endpoints: latency-svc-9pknv [664.943254ms] Mar 12 19:32:52.717: INFO: Created: latency-svc-6p4f5 Mar 12 19:32:52.736: INFO: Got endpoints: latency-svc-6p4f5 [678.503745ms] Mar 12 19:32:52.754: INFO: Created: latency-svc-vlhvm Mar 12 19:32:52.759: INFO: Got endpoints: latency-svc-vlhvm [671.110137ms] Mar 12 19:32:52.779: INFO: Created: latency-svc-wctnb Mar 12 19:32:52.783: INFO: Got endpoints: latency-svc-wctnb [623.683901ms] Mar 12 19:32:52.830: INFO: Created: latency-svc-zznbf Mar 12 19:32:52.839: INFO: Got endpoints: latency-svc-zznbf [637.45379ms] Mar 12 19:32:52.863: INFO: Created: latency-svc-8p4ll Mar 12 19:32:52.868: INFO: Got endpoints: latency-svc-8p4ll [645.995008ms] Mar 12 19:32:52.887: INFO: Created: latency-svc-h9bj8 Mar 12 19:32:52.905: INFO: Got endpoints: latency-svc-h9bj8 [613.868783ms] Mar 12 19:32:52.973: INFO: Created: latency-svc-66fjq Mar 12 19:32:52.975: INFO: Got endpoints: latency-svc-66fjq [635.453262ms] Mar 12 19:32:53.013: INFO: Created: latency-svc-rlzh2 Mar 12 19:32:53.019: INFO: Got endpoints: latency-svc-rlzh2 [646.470774ms] Mar 12 19:32:53.037: INFO: Created: latency-svc-44fpw Mar 12 19:32:53.044: INFO: Got endpoints: latency-svc-44fpw [597.585224ms] Mar 12 19:32:53.072: INFO: Created: latency-svc-tvc94 Mar 12 19:32:53.111: INFO: Got endpoints: 
latency-svc-tvc94 [629.973456ms] Mar 12 19:32:53.126: INFO: Created: latency-svc-2g5ws Mar 12 19:32:53.134: INFO: Got endpoints: latency-svc-2g5ws [633.158389ms] Mar 12 19:32:53.152: INFO: Created: latency-svc-9rrgd Mar 12 19:32:53.158: INFO: Got endpoints: latency-svc-9rrgd [574.846382ms] Mar 12 19:32:53.177: INFO: Created: latency-svc-4wss2 Mar 12 19:32:53.200: INFO: Got endpoints: latency-svc-4wss2 [579.918337ms] Mar 12 19:32:53.201: INFO: Created: latency-svc-l4n9s Mar 12 19:32:53.248: INFO: Got endpoints: latency-svc-l4n9s [604.495819ms] Mar 12 19:32:53.270: INFO: Created: latency-svc-5k2nv Mar 12 19:32:53.279: INFO: Got endpoints: latency-svc-5k2nv [587.260313ms] Mar 12 19:32:53.302: INFO: Created: latency-svc-z9tpj Mar 12 19:32:53.309: INFO: Got endpoints: latency-svc-z9tpj [573.573063ms] Mar 12 19:32:53.325: INFO: Created: latency-svc-pl7w9 Mar 12 19:32:53.328: INFO: Got endpoints: latency-svc-pl7w9 [569.384325ms] Mar 12 19:32:53.387: INFO: Created: latency-svc-hp2cj Mar 12 19:32:53.476: INFO: Got endpoints: latency-svc-hp2cj [692.942086ms] Mar 12 19:32:53.486: INFO: Created: latency-svc-wtdd5 Mar 12 19:32:53.542: INFO: Got endpoints: latency-svc-wtdd5 [703.083471ms] Mar 12 19:32:53.572: INFO: Created: latency-svc-qr9rc Mar 12 19:32:53.574: INFO: Got endpoints: latency-svc-qr9rc [706.548674ms] Mar 12 19:32:53.595: INFO: Created: latency-svc-dsnlm Mar 12 19:32:53.598: INFO: Got endpoints: latency-svc-dsnlm [693.667521ms] Mar 12 19:32:53.619: INFO: Created: latency-svc-nsg9t Mar 12 19:32:53.623: INFO: Got endpoints: latency-svc-nsg9t [647.789241ms] Mar 12 19:32:53.686: INFO: Created: latency-svc-n7g9q Mar 12 19:32:53.688: INFO: Got endpoints: latency-svc-n7g9q [669.321018ms] Mar 12 19:32:53.720: INFO: Created: latency-svc-fm7zd Mar 12 19:32:53.726: INFO: Got endpoints: latency-svc-fm7zd [682.141341ms] Mar 12 19:32:53.745: INFO: Created: latency-svc-ssvvm Mar 12 19:32:53.750: INFO: Got endpoints: latency-svc-ssvvm [639.029874ms] Mar 12 19:32:53.769: INFO: Created: latency-svc-jqrvz Mar 12 19:32:53.774: INFO: Got endpoints: latency-svc-jqrvz [640.486054ms] Mar 12 19:32:53.825: INFO: Created: latency-svc-zjjrf Mar 12 19:32:53.826: INFO: Got endpoints: latency-svc-zjjrf [667.570296ms] Mar 12 19:32:53.860: INFO: Created: latency-svc-bnsg2 Mar 12 19:32:53.875: INFO: Got endpoints: latency-svc-bnsg2 [675.646048ms] Mar 12 19:32:53.906: INFO: Created: latency-svc-x5vhd Mar 12 19:32:53.919: INFO: Got endpoints: latency-svc-x5vhd [670.782772ms] Mar 12 19:32:53.979: INFO: Created: latency-svc-ns8wl Mar 12 19:32:53.991: INFO: Got endpoints: latency-svc-ns8wl [712.094657ms] Mar 12 19:32:53.991: INFO: Latencies: [43.001991ms 66.64966ms 164.951708ms 203.401604ms 290.441085ms 293.026232ms 325.811802ms 361.628703ms 433.590304ms 440.387645ms 468.756985ms 470.579966ms 492.209884ms 514.815465ms 516.767062ms 520.393457ms 527.207667ms 530.279886ms 530.684844ms 537.926177ms 556.52653ms 556.680757ms 556.827888ms 562.339354ms 562.79694ms 568.28521ms 568.702368ms 569.384325ms 569.712372ms 573.573063ms 574.846382ms 577.985012ms 579.277486ms 579.413504ms 579.704535ms 579.918337ms 581.070508ms 582.54159ms 587.260313ms 587.356271ms 587.362402ms 587.731903ms 587.992408ms 588.55084ms 591.998731ms 592.624584ms 593.236478ms 593.434907ms 594.130063ms 597.585224ms 598.246308ms 599.254552ms 601.224463ms 604.495819ms 607.757144ms 608.943941ms 609.686857ms 613.753986ms 613.868783ms 617.061248ms 619.819621ms 623.683901ms 624.287373ms 627.959626ms 629.095442ms 629.973456ms 633.158389ms 633.710279ms 634.763301ms 635.453262ms 
636.335662ms 637.45379ms 638.759827ms 638.914729ms 639.029874ms 640.486054ms 644.864287ms 645.995008ms 646.470774ms 647.177581ms 647.789241ms 648.595343ms 650.818767ms 650.997643ms 654.858259ms 655.491506ms 657.085591ms 664.943254ms 664.958104ms 667.570296ms 668.469268ms 669.321018ms 670.191141ms 670.782772ms 671.110137ms 671.574059ms 675.346439ms 675.646048ms 678.503745ms 680.121277ms 680.555065ms 681.921119ms 682.066169ms 682.141341ms 682.320389ms 684.076219ms 689.118709ms 691.694927ms 692.882842ms 692.942086ms 693.492442ms 693.667521ms 694.535646ms 694.563301ms 695.719935ms 695.990324ms 696.798823ms 699.446791ms 700.42234ms 701.093964ms 703.083471ms 703.125663ms 704.362123ms 704.47087ms 705.021193ms 706.373488ms 706.548674ms 706.602172ms 707.630078ms 708.766776ms 710.44207ms 710.691207ms 712.048984ms 712.094657ms 714.402917ms 714.608717ms 717.489555ms 718.24268ms 718.640501ms 718.720985ms 719.696668ms 720.976609ms 725.94279ms 726.192728ms 727.984254ms 732.074387ms 734.666413ms 737.699379ms 738.051207ms 741.982611ms 745.925456ms 747.289075ms 747.508113ms 748.488465ms 748.665882ms 750.891379ms 754.755807ms 754.770212ms 754.777535ms 755.22644ms 760.848428ms 761.382748ms 761.962416ms 762.545024ms 767.828286ms 772.027801ms 773.695215ms 777.733596ms 783.616021ms 783.621829ms 787.465648ms 790.124854ms 790.649988ms 792.617121ms 797.845452ms 800.917607ms 801.467621ms 801.854888ms 803.242278ms 804.283651ms 805.936437ms 813.339696ms 814.381237ms 823.904895ms 829.802841ms 831.288845ms 855.906457ms 859.353367ms 862.617728ms 869.839895ms 873.205148ms 879.913083ms 883.418755ms 888.760044ms 891.1241ms 908.035987ms 932.682381ms 933.430861ms 934.782985ms 935.498408ms] Mar 12 19:32:53.991: INFO: 50 %ile: 680.555065ms Mar 12 19:32:53.991: INFO: 90 %ile: 805.936437ms Mar 12 19:32:53.991: INFO: 99 %ile: 934.782985ms Mar 12 19:32:53.991: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:32:53.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8136" for this suite. 
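The three percentile lines above are order statistics over the 200 sorted samples. A minimal stand-alone Go sketch of that summary (the cut-off convention index = p*n - 1 is an assumption for illustration, not necessarily the e2e framework's exact formula):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the value at fraction p of an ascending sample set.
func percentile(sorted []time.Duration, p float64) time.Duration {
	idx := int(p*float64(len(sorted))) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Abbreviated sample set in nanoseconds; the run above collected 200 values.
	samples := []time.Duration{
		43001991, 66649660, 680555065, 805936437, 934782985, 935498408,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%.0f %%ile: %v\n", p*100, percentile(samples, p))
	}
}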
• [SLOW TEST:11.525 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":125,"skipped":2080,"failed":0} [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:32:54.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:32:54.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d" in namespace "downward-api-847" to be "success or failure" Mar 12 19:32:54.071: INFO: Pod "downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.042284ms Mar 12 19:32:56.074: INFO: Pod "downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010269529s Mar 12 19:32:58.080: INFO: Pod "downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016555385s STEP: Saw pod success Mar 12 19:32:58.080: INFO: Pod "downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d" satisfied condition "success or failure" Mar 12 19:32:58.083: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d container client-container: STEP: delete the pod Mar 12 19:32:58.110: INFO: Waiting for pod downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d to disappear Mar 12 19:32:58.116: INFO: Pod downwardapi-volume-586ce658-03ad-4bc5-8e5b-17c098ad1d9d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:32:58.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-847" for this suite. 
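The downward API volume plugin exercised above projects container resource fields into files the container can read back. A sketch of the kind of pod spec involved, built with the k8s.io/api types (the busybox image, file path, and limit value are illustrative, not the test's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					// The downward API file below surfaces this limit.
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}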
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2080,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:32:58.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:32:58.746: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:33:01.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:02.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:03.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:04.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:05.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:06.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:07.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:08.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:09.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 19:33:10.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:33:10.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2774-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:33:11.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2282" for this suite. STEP: Destroying namespace "webhook-2282-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.876 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":127,"skipped":2101,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:33:12.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-378 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 19:33:12.107: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 19:33:32.278: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.212:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-378 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:33:32.278: INFO: >>> kubeConfig: /root/.kube/config I0312 19:33:32.302853 6 log.go:172] (0xc00173a2c0) (0xc0028452c0) Create stream I0312 19:33:32.302880 6 log.go:172] (0xc00173a2c0) (0xc0028452c0) Stream added, broadcasting: 1 I0312 19:33:32.304575 6 log.go:172] (0xc00173a2c0) Reply frame received for 1 I0312 19:33:32.304605 6 log.go:172] (0xc00173a2c0) (0xc002845360) Create stream I0312 19:33:32.304618 6 log.go:172] (0xc00173a2c0) (0xc002845360) Stream added, broadcasting: 3 I0312 19:33:32.305360 6 log.go:172] (0xc00173a2c0) Reply frame received for 3 I0312 19:33:32.305387 6 log.go:172] (0xc00173a2c0) (0xc001df9220) Create stream I0312 19:33:32.305399 6 log.go:172] (0xc00173a2c0) (0xc001df9220) Stream added, broadcasting: 5 I0312 19:33:32.306221 6 log.go:172] (0xc00173a2c0) Reply frame received for 5 I0312 19:33:32.380564 6 log.go:172] (0xc00173a2c0) Data frame received for 3 I0312 19:33:32.380603 6 log.go:172] (0xc002845360) (3) Data frame handling I0312 19:33:32.380615 6 log.go:172] (0xc002845360) (3) Data frame sent I0312 19:33:32.380624 6 log.go:172] (0xc00173a2c0) Data frame received for 3 I0312 19:33:32.380632 6 log.go:172] (0xc002845360) (3) Data frame handling I0312 19:33:32.380653 6 log.go:172] (0xc00173a2c0) Data frame received for 5 I0312 19:33:32.380673 6 log.go:172] (0xc001df9220) (5) Data frame handling I0312 19:33:32.382079 6 log.go:172] 
(0xc00173a2c0) Data frame received for 1 I0312 19:33:32.382094 6 log.go:172] (0xc0028452c0) (1) Data frame handling I0312 19:33:32.382101 6 log.go:172] (0xc0028452c0) (1) Data frame sent I0312 19:33:32.382109 6 log.go:172] (0xc00173a2c0) (0xc0028452c0) Stream removed, broadcasting: 1 I0312 19:33:32.382143 6 log.go:172] (0xc00173a2c0) Go away received I0312 19:33:32.382248 6 log.go:172] (0xc00173a2c0) (0xc0028452c0) Stream removed, broadcasting: 1 I0312 19:33:32.382266 6 log.go:172] (0xc00173a2c0) (0xc002845360) Stream removed, broadcasting: 3 I0312 19:33:32.382278 6 log.go:172] (0xc00173a2c0) (0xc001df9220) Stream removed, broadcasting: 5 Mar 12 19:33:32.382: INFO: Found all expected endpoints: [netserver-0] Mar 12 19:33:32.384: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.205:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-378 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:33:32.384: INFO: >>> kubeConfig: /root/.kube/config I0312 19:33:32.404036 6 log.go:172] (0xc0021a62c0) (0xc0021dab40) Create stream I0312 19:33:32.404054 6 log.go:172] (0xc0021a62c0) (0xc0021dab40) Stream added, broadcasting: 1 I0312 19:33:32.407496 6 log.go:172] (0xc0021a62c0) Reply frame received for 1 I0312 19:33:32.407531 6 log.go:172] (0xc0021a62c0) (0xc0021dabe0) Create stream I0312 19:33:32.407544 6 log.go:172] (0xc0021a62c0) (0xc0021dabe0) Stream added, broadcasting: 3 I0312 19:33:32.409544 6 log.go:172] (0xc0021a62c0) Reply frame received for 3 I0312 19:33:32.409567 6 log.go:172] (0xc0021a62c0) (0xc0021dac80) Create stream I0312 19:33:32.409574 6 log.go:172] (0xc0021a62c0) (0xc0021dac80) Stream added, broadcasting: 5 I0312 19:33:32.410108 6 log.go:172] (0xc0021a62c0) Reply frame received for 5 I0312 19:33:32.471391 6 log.go:172] (0xc0021a62c0) Data frame received for 3 I0312 19:33:32.471411 6 log.go:172] (0xc0021dabe0) (3) Data frame handling I0312 19:33:32.471425 6 log.go:172] (0xc0021dabe0) (3) Data frame sent I0312 19:33:32.471430 6 log.go:172] (0xc0021a62c0) Data frame received for 3 I0312 19:33:32.471437 6 log.go:172] (0xc0021dabe0) (3) Data frame handling I0312 19:33:32.471663 6 log.go:172] (0xc0021a62c0) Data frame received for 5 I0312 19:33:32.471688 6 log.go:172] (0xc0021dac80) (5) Data frame handling I0312 19:33:32.473366 6 log.go:172] (0xc0021a62c0) Data frame received for 1 I0312 19:33:32.473381 6 log.go:172] (0xc0021dab40) (1) Data frame handling I0312 19:33:32.473393 6 log.go:172] (0xc0021dab40) (1) Data frame sent I0312 19:33:32.473403 6 log.go:172] (0xc0021a62c0) (0xc0021dab40) Stream removed, broadcasting: 1 I0312 19:33:32.473416 6 log.go:172] (0xc0021a62c0) Go away received I0312 19:33:32.473567 6 log.go:172] (0xc0021a62c0) (0xc0021dab40) Stream removed, broadcasting: 1 I0312 19:33:32.473593 6 log.go:172] (0xc0021a62c0) (0xc0021dabe0) Stream removed, broadcasting: 3 I0312 19:33:32.473608 6 log.go:172] (0xc0021a62c0) (0xc0021dac80) Stream removed, broadcasting: 5 Mar 12 19:33:32.473: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:33:32.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-378" for this suite. 
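Each connectivity probe above shells into a host-network pod and curls http://<podIP>:8080/hostName on the agnhost netserver, which replies with the serving pod's name. The same check in plain Go (the pod IP is the one the run discovered; in practice you would read it from pod status):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// checkNetserver fetches /hostName from an agnhost netserver pod and reports
// which backend answered.
func checkNetserver(podIP string) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second} // mirrors curl's --max-time 15
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	host, err := checkNetserver("10.244.2.212")
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	fmt.Println("answered by:", host) // expect e.g. netserver-0
}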
• [SLOW TEST:20.479 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2101,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:33:32.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:33:32.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 12 19:33:32.640: INFO: stderr: "" Mar 12 19:33:32.640: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-12T18:45:36Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:33:32.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3587" for this suite. 
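The assertion behind this spec is simply that both the Client Version and Server Version stanzas show up in kubectl version output. A rough equivalent (assumes kubectl on PATH and the same kubeconfig path as the run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	s := string(out)
	// The e2e check is essentially that both halves are printed.
	fmt.Println("client printed:", strings.Contains(s, "Client Version"))
	fmt.Println("server printed:", strings.Contains(s, "Server Version"))
}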
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":129,"skipped":2108,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:33:32.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3711 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3711 STEP: Creating statefulset with conflicting port in namespace statefulset-3711 STEP: Waiting until pod test-pod will start running in namespace statefulset-3711 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3711 Mar 12 19:33:36.805: INFO: Observed stateful pod in namespace: statefulset-3711, name: ss-0, uid: 2ba81088-deb8-4743-85ed-a52aa53c20bf, status phase: Pending. Waiting for statefulset controller to delete. Mar 12 19:33:37.343: INFO: Observed stateful pod in namespace: statefulset-3711, name: ss-0, uid: 2ba81088-deb8-4743-85ed-a52aa53c20bf, status phase: Failed. Waiting for statefulset controller to delete. Mar 12 19:33:37.360: INFO: Observed stateful pod in namespace: statefulset-3711, name: ss-0, uid: 2ba81088-deb8-4743-85ed-a52aa53c20bf, status phase: Failed. Waiting for statefulset controller to delete. Mar 12 19:33:37.388: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3711 STEP: Removing pod with conflicting port in namespace statefulset-3711 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3711 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 19:33:41.436: INFO: Deleting all statefulset in ns statefulset-3711 Mar 12 19:33:41.438: INFO: Scaling statefulset ss to 0 Mar 12 19:33:51.454: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 19:33:51.456: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:33:51.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3711" for this suite. 
• [SLOW TEST:18.795 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":130,"skipped":2118,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:33:51.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9339.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9339.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9339.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9339.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9339.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9339.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:33:55.598: INFO: DNS probes using dns-9339/dns-test-afb229ff-fcf2-4ec2-a0b0-4c3582c4c820 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:33:55.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9339" for this suite. 
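The wheezy/jessie probe loops above reduce to "can this name be resolved from inside a pod". The getent half of that check in Go, run from within a cluster pod so /etc/resolv.conf points at the cluster DNS (the name is the one from this run; any Service name works):

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	addrs, err := net.LookupHost("dns-querier-1.dns-test-service.dns-9339.svc.cluster.local")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("OK:", strings.Join(addrs, ", "))
}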
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":131,"skipped":2138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:33:55.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 12 19:33:55.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 12 19:33:55.793: INFO: stderr: "" Mar 12 19:33:55.793: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:33:55.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5867" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":132,"skipped":2164,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:33:55.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 19:33:55.890: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:00.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9559" for this suite. 
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":133,"skipped":2184,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:00.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:34:00.292: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86ce19ee-c0a2-41c4-8d9f-a326add98160" in namespace "downward-api-9377" to be "success or failure" Mar 12 19:34:00.295: INFO: Pod "downwardapi-volume-86ce19ee-c0a2-41c4-8d9f-a326add98160": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047742ms Mar 12 19:34:02.299: INFO: Pod "downwardapi-volume-86ce19ee-c0a2-41c4-8d9f-a326add98160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006433502s STEP: Saw pod success Mar 12 19:34:02.299: INFO: Pod "downwardapi-volume-86ce19ee-c0a2-41c4-8d9f-a326add98160" satisfied condition "success or failure" Mar 12 19:34:02.301: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-86ce19ee-c0a2-41c4-8d9f-a326add98160 container client-container: STEP: delete the pod Mar 12 19:34:02.364: INFO: Waiting for pod downwardapi-volume-86ce19ee-c0a2-41c4-8d9f-a326add98160 to disappear Mar 12 19:34:02.367: INFO: Pod downwardapi-volume-86ce19ee-c0a2-41c4-8d9f-a326add98160 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:02.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9377" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2193,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:02.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 12 19:34:02.429: INFO: Waiting up to 5m0s for pod "client-containers-66ba386d-fcc2-43fa-a9ce-0e854642ce86" in namespace "containers-9526" to be "success or failure" Mar 12 19:34:02.433: INFO: Pod "client-containers-66ba386d-fcc2-43fa-a9ce-0e854642ce86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159266ms Mar 12 19:34:04.436: INFO: Pod "client-containers-66ba386d-fcc2-43fa-a9ce-0e854642ce86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007167457s STEP: Saw pod success Mar 12 19:34:04.436: INFO: Pod "client-containers-66ba386d-fcc2-43fa-a9ce-0e854642ce86" satisfied condition "success or failure" Mar 12 19:34:04.438: INFO: Trying to get logs from node jerma-worker2 pod client-containers-66ba386d-fcc2-43fa-a9ce-0e854642ce86 container test-container: STEP: delete the pod Mar 12 19:34:04.492: INFO: Waiting for pod client-containers-66ba386d-fcc2-43fa-a9ce-0e854642ce86 to disappear Mar 12 19:34:04.505: INFO: Pod client-containers-66ba386d-fcc2-43fa-a9ce-0e854642ce86 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:04.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9526" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:04.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-9395 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9395 to expose endpoints map[] Mar 12 19:34:04.638: INFO: Get endpoints failed (9.992924ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 12 19:34:05.641: INFO: successfully validated that service endpoint-test2 in namespace services-9395 exposes endpoints map[] (1.012798382s elapsed) STEP: Creating pod pod1 in namespace services-9395 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9395 to expose endpoints map[pod1:[80]] Mar 12 19:34:07.676: INFO: successfully validated that service endpoint-test2 in namespace services-9395 exposes endpoints map[pod1:[80]] (2.030434427s elapsed) STEP: Creating pod pod2 in namespace services-9395 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9395 to expose endpoints map[pod1:[80] pod2:[80]] Mar 12 19:34:09.721: INFO: successfully validated that service endpoint-test2 in namespace services-9395 exposes endpoints map[pod1:[80] pod2:[80]] (2.041116448s elapsed) STEP: Deleting pod pod1 in namespace services-9395 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9395 to expose endpoints map[pod2:[80]] Mar 12 19:34:09.763: INFO: successfully validated that service endpoint-test2 in namespace services-9395 exposes endpoints map[pod2:[80]] (38.540736ms elapsed) STEP: Deleting pod pod2 in namespace services-9395 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9395 to expose endpoints map[] Mar 12 19:34:09.775: INFO: successfully validated that service endpoint-test2 in namespace services-9395 exposes endpoints map[] (9.439709ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:09.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9395" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:5.354 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":136,"skipped":2241,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:09.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:34:10.267: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 19:34:12.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638450, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638450, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638450, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638450, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:34:15.326: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:15.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1145" for this suite. 
STEP: Destroying namespace "webhook-1145-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.647 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":137,"skipped":2242,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:15.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:17.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9613" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2259,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:17.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:34:18.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 19:34:20.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638458, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638458, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638458, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638458, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:34:23.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:23.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2130" for this suite. STEP: Destroying namespace "webhook-2130-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.297 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":139,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:23.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:34:24.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397" in namespace "downward-api-1829" to be "success or failure" Mar 12 19:34:24.029: INFO: Pod "downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397": Phase="Pending", Reason="", readiness=false. Elapsed: 24.265885ms Mar 12 19:34:26.032: INFO: Pod "downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397": Phase="Running", Reason="", readiness=true. Elapsed: 2.02765309s Mar 12 19:34:28.036: INFO: Pod "downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031216073s STEP: Saw pod success Mar 12 19:34:28.036: INFO: Pod "downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397" satisfied condition "success or failure" Mar 12 19:34:28.039: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397 container client-container: STEP: delete the pod Mar 12 19:34:28.069: INFO: Waiting for pod downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397 to disappear Mar 12 19:34:28.075: INFO: Pod downwardapi-volume-1bedfcaf-c1fd-497f-8082-346c6f67c397 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:28.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1829" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2311,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:28.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:41.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5690" for this suite. • [SLOW TEST:13.206 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":141,"skipped":2327,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:41.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:34:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3432" for this suite. • [SLOW TEST:8.072 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":142,"skipped":2333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:34:49.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 12 19:34:49.449: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 12 19:35:00.064: INFO: >>> kubeConfig: /root/.kube/config Mar 12 19:35:01.925: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:35:12.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2150" for this suite. 
• [SLOW TEST:22.661 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":143,"skipped":2360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:35:12.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 12 19:35:12.095: INFO: Waiting up to 5m0s for pod "pod-3573f9fb-980f-48c7-a7a7-7a2dba92ca2e" in namespace "emptydir-138" to be "success or failure" Mar 12 19:35:12.106: INFO: Pod "pod-3573f9fb-980f-48c7-a7a7-7a2dba92ca2e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.559929ms Mar 12 19:35:14.109: INFO: Pod "pod-3573f9fb-980f-48c7-a7a7-7a2dba92ca2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014143646s STEP: Saw pod success Mar 12 19:35:14.109: INFO: Pod "pod-3573f9fb-980f-48c7-a7a7-7a2dba92ca2e" satisfied condition "success or failure" Mar 12 19:35:14.112: INFO: Trying to get logs from node jerma-worker pod pod-3573f9fb-980f-48c7-a7a7-7a2dba92ca2e container test-container: STEP: delete the pod Mar 12 19:35:14.150: INFO: Waiting for pod pod-3573f9fb-980f-48c7-a7a7-7a2dba92ca2e to disappear Mar 12 19:35:14.172: INFO: Pod pod-3573f9fb-980f-48c7-a7a7-7a2dba92ca2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:35:14.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-138" for this suite. 
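"Default medium" above means the emptyDir is backed by the node's filesystem; setting the medium to Memory swaps in tmpfs instead, and the mode check asserts the expected default permissions on the mount. The two variants side by side, as a sketch with the k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{
			Name:         "on-disk", // default medium: node-local storage
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		},
		{
			Name: "in-memory", // tmpfs-backed variant
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		},
	}
	out, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(out))
}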
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2390,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:35:14.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:35:18.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4923" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2392,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:35:18.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-e27f35d1-5057-47ed-890f-6cdfc32bc1ef STEP: Creating a pod to test consume secrets Mar 12 19:35:18.351: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec" in namespace "projected-4216" to be "success or failure" Mar 12 19:35:18.367: INFO: Pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec": Phase="Pending", Reason="", readiness=false. Elapsed: 16.268318ms Mar 12 19:35:20.371: INFO: Pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019838106s Mar 12 19:35:22.374: INFO: Pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 12 19:35:18.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-e27f35d1-5057-47ed-890f-6cdfc32bc1ef
STEP: Creating a pod to test consume secrets
Mar 12 19:35:18.351: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec" in namespace "projected-4216" to be "success or failure"
Mar 12 19:35:18.367: INFO: Pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec": Phase="Pending", Reason="", readiness=false. Elapsed: 16.268318ms
Mar 12 19:35:20.371: INFO: Pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019838106s
Mar 12 19:35:22.374: INFO: Pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023374254s
STEP: Saw pod success
Mar 12 19:35:22.374: INFO: Pod "pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec" satisfied condition "success or failure"
Mar 12 19:35:22.376: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec container secret-volume-test:
STEP: delete the pod
Mar 12 19:35:22.401: INFO: Waiting for pod pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec to disappear
Mar 12 19:35:22.407: INFO: Pod pod-projected-secrets-2a4d7ead-52d8-4862-b492-4b0c43d225ec no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 12 19:35:22.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4216" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2395,"failed":0}
SSSS
------------------------------
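What "consumable in multiple volumes" exercises is one secret projected into two separate volumes of the same pod. A sketch under the same v1.17 API assumptions, with hypothetical names and mount paths (the real test reads the mounted files and compares them against the secret's data):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretVolume wraps one secret in a projected volume source.
func projectedSecretVolume(secretName string) corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			}},
		},
	}
}

// multiVolumeSecretPod mounts the same secret twice, at two paths, which is
// the shape of pod this spec verifies.
func multiVolumeSecretPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: projectedSecretVolume(secretName)},
				{Name: "secret-volume-2", VolumeSource: projectedSecretVolume(secretName)},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.31",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}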
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2399,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:35:24.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:35:46.677: INFO: Container started at 2020-03-12 19:35:25 +0000 UTC, pod became ready at 2020-03-12 19:35:46 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:35:46.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3357" for this suite. • [SLOW TEST:22.144 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2412,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:35:46.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:35:47.504: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 19:35:49.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 12 19:35:46.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 12 19:35:47.504: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 12 19:35:49.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638547, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638547, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638547, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638547, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 12 19:35:52.564: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 12 19:35:52.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5388" for this suite.
STEP: Destroying namespace "webhook-5388-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.113 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":149,"skipped":2430,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
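The point of this spec is that dynamic admission is deliberately not applied to webhook configuration objects themselves, so a webhook like the sketch below, which tries to deny DELETE on ValidatingWebhookConfiguration and MutatingWebhookConfiguration resources, cannot actually block their removal. The configuration name, webhook name, and the /always-deny path are hypothetical; the service reference reuses the e2e-test-webhook service and webhook-5388 namespace from the log above:

package main

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func strPtr(s string) *string { return &s }

// configInterceptingWebhook targets webhook configuration objects; the API
// server skips admission webhooks for these resources (avoiding deadlock),
// which is exactly what the test above verifies.
func configInterceptingWebhook(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-webhook-configuration-deletions"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-webhook-configuration-deletions.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Delete},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"admissionregistration.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"validatingwebhookconfigurations", "mutatingwebhookconfigurations"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-5388",
					Name:      "e2e-test-webhook",
					Path:      strPtr("/always-deny"),
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}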
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 12 19:35:52.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 12 19:35:52.864: INFO: Creating deployment "webserver-deployment"
Mar 12 19:35:52.867: INFO: Waiting for observed generation 1
Mar 12 19:35:54.921: INFO: Waiting for all required pods to come up
Mar 12 19:35:54.925: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 12 19:35:58.938: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 12 19:35:58.941: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 12 19:35:58.944: INFO: Updating deployment webserver-deployment
Mar 12 19:35:58.944: INFO: Waiting for observed generation 2
Mar 12 19:36:00.953: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 12 19:36:00.955: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 12 19:36:00.957: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 12 19:36:00.963: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 12 19:36:00.963: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 12 19:36:00.964: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 12 19:36:00.968: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 12 19:36:00.968: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 12 19:36:00.972: INFO: Updating deployment webserver-deployment
Mar 12 19:36:00.972: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 12 19:36:01.027: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 12 19:36:03.203: INFO: Verifying
that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 19:36:03.207: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-429 /apis/apps/v1/namespaces/deployment-429/deployments/webserver-deployment 9905e8c6-0793-4bca-b97b-cac13fb5fbd1 1208801 3 2020-03-12 19:35:52 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0058632b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-12 19:36:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-12 19:36:01 +0000 UTC,LastTransitionTime:2020-03-12 19:35:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 12 19:36:03.209: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-429 /apis/apps/v1/namespaces/deployment-429/replicasets/webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 1208797 3 2020-03-12 19:35:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 9905e8c6-0793-4bca-b97b-cac13fb5fbd1 0xc004073857 0xc004073858}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040738c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 19:36:03.209: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 12 19:36:03.209: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-429 /apis/apps/v1/namespaces/deployment-429/replicasets/webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 1208783 3 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 9905e8c6-0793-4bca-b97b-cac13fb5fbd1 0xc004073797 0xc004073798}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040737f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 12 19:36:03.213: INFO: Pod "webserver-deployment-595b5b9587-426g9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-426g9 webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-426g9 7328f5af-2769-49f9-92b7-6f08cfbef169 1208822 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413ac57 0xc00413ac58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.213: INFO: Pod "webserver-deployment-595b5b9587-4sj9l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4sj9l webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-4sj9l f7270614-63e4-459b-a75d-83061e5714c8 1208828 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413adb7 0xc00413adb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,
EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.213: INFO: Pod "webserver-deployment-595b5b9587-5zq6x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5zq6x webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-5zq6x 1a4e0db7-8618-4f6f-b6b5-d73e4a9a45ff 1208851 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413af17 0xc00413af18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.213: INFO: Pod "webserver-deployment-595b5b9587-6652m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6652m webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-6652m 8a9be348-37c5-40fa-84a4-9825d3c1b3c2 1208826 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413b077 0xc00413b078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.213: INFO: Pod "webserver-deployment-595b5b9587-799dc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-799dc webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-799dc 0cfd013a-15b8-4552-ad53-10a446f4f1b9 1208800 0 2020-03-12 19:36:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413b1d7 0xc00413b1d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.214: INFO: Pod "webserver-deployment-595b5b9587-7qtjr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7qtjr webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-7qtjr af8468d9-940f-4dc6-a108-729cc5f5be36 1208806 0 2020-03-12 19:36:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413b347 0xc00413b348}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.214: INFO: Pod "webserver-deployment-595b5b9587-89hcv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-89hcv webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-89hcv 21b7bc01-b377-400e-accb-fac9790589df 1208837 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413b4a7 0xc00413b4a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.214: INFO: Pod "webserver-deployment-595b5b9587-8p9f7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8p9f7 webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-8p9f7 59d42167-2d50-48a2-ad7e-94857a2d210c 1208591 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413b607 0xc00413b608}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enab
leServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.223,StartTime:2020-03-12 19:35:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2ab6780d1d2e48a831561d4890128924c4fdf9b58e21ad5e5e5a449e296704b2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.214: INFO: Pod "webserver-deployment-595b5b9587-c55w8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c55w8 webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-c55w8 74339bf1-5be7-425d-9b77-84cd526f8c5a 1208595 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413b797 0xc00413b798}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.226,StartTime:2020-03-12 19:35:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8fe41d9be10950d0fda119aec9ed7e1ec46381d587401ab6ee8bd3995aa55a9f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.214: INFO: Pod "webserver-deployment-595b5b9587-jb7v6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jb7v6 webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-jb7v6 b640b899-a1d8-4165-a34f-8b0bc9ca5df3 1208777 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413b917 0xc00413b918}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.214: INFO: Pod "webserver-deployment-595b5b9587-lkjrk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lkjrk webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-lkjrk 82ae79ae-2cdb-44a9-accf-a02d34ff0e4a 1208622 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413ba37 0xc00413ba38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Toler
ationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.227,StartTime:2020-03-12 19:35:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0ff38aa059bbde68079bccce8d90aa5790ac967df287be391c1d53ca6a17d638,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.214: INFO: Pod "webserver-deployment-595b5b9587-q462h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q462h webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-q462h ccc2c54e-5be0-420e-9c2f-d4a0ff93910e 1208627 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413bbb7 0xc00413bbb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.228,StartTime:2020-03-12 19:35:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://963fb3854bd2c7f74f0639d4ce5e5ea99eeb63479996bd170b29e233d25c85d5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.215: INFO: Pod "webserver-deployment-595b5b9587-rsgsf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rsgsf webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-rsgsf 7112f754-8a6c-4caa-9d97-94d5c8e6d1c0 1208608 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413bd37 0xc00413bd38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.227,StartTime:2020-03-12 19:35:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://623ad46095c0be9ebdd0fc178f37c992be5842d6b64581e63ab1e4d77577d410,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.215: INFO: Pod "webserver-deployment-595b5b9587-rsqt7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rsqt7 webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-rsqt7 754470be-7011-46c9-a719-7be47628ab01 1208804 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc00413beb7 0xc00413beb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.215: INFO: Pod "webserver-deployment-595b5b9587-sjdgr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sjdgr webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-sjdgr 1b62006e-5255-4720-9e9d-94f726cffe47 1208818 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc003a86047 0xc003a86048}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.215: INFO: Pod "webserver-deployment-595b5b9587-t6r5b" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t6r5b webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-t6r5b dd51493c-c65b-4394-bbf8-aa909d60b939 1208602 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc003a861a7 0xc003a861a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.224,StartTime:2020-03-12 19:35:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2924fc2d5d310c9f121ca3223af99b089cd2756d344823f4a0e8610343a24009,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.215: INFO: Pod "webserver-deployment-595b5b9587-thrb2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-thrb2 webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-thrb2 d1556de2-b91f-4f7d-a8b2-8a7507a460dc 1208765 0 2020-03-12 19:36:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc003a86327 0xc003a86328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.215: INFO: Pod "webserver-deployment-595b5b9587-wd7ml" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wd7ml webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-wd7ml b0069eb1-6125-40c5-bfbb-a731b7244e93 1208605 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc003a86487 0xc003a86488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.224,StartTime:2020-03-12 19:35:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://019c54a45b67c913026813088cc06a4158256adf7df73799aff4b10dbe3c2e5d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-595b5b9587-wgwv8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wgwv8 webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-wgwv8 77bbb0d1-65c9-42bd-8650-b6d91f325757 1208809 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc003a86607 0xc003a86608}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-595b5b9587-wkpfm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wkpfm webserver-deployment-595b5b9587- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-595b5b9587-wkpfm df7f24de-dd2b-4a58-af57-7f6ed4e43011 1208589 0 2020-03-12 19:35:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 acca3eb1-1430-47c5-ba48-dac66c4501a0 0xc003a86767 0xc003a86768}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.225,StartTime:2020-03-12 19:35:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:35:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42c006ffafddc4a0fdef11fcad4900ed0880f4d5f5cb1ac18032d87c70884a39,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-c7997dcc8-bbhp8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bbhp8 webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-bbhp8 c2adecce-a6f3-4814-92c1-51a609ecd361 1208866 0 2020-03-12 19:35:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a868e7 0xc003a868e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.229,StartTime:2020-03-12 19:35:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-c7997dcc8-ctgp4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ctgp4 webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-ctgp4 1bdfd43a-5de1-4891-9712-0e0ba0979eef 1208814 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a86aa7 0xc003a86aa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-c7997dcc8-d9sq7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d9sq7 webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-d9sq7 784d7edd-7699-446f-9aa9-62c3e3cdbbdf 1208856 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a86c27 0xc003a86c28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-c7997dcc8-dqb8g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dqb8g webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-dqb8g 41a6aa8d-c174-4919-a30c-416365b3164c 1208795 0 2020-03-12 19:36:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a86da7 0xc003a86da8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-c7997dcc8-hpqhh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hpqhh webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-hpqhh 1d7fc281-a16b-4433-87ff-5b43af8fa526 1208838 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a86f27 0xc003a86f28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.216: INFO: Pod "webserver-deployment-c7997dcc8-lpjqh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lpjqh webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-lpjqh fd05905c-d08f-4794-a6ff-a5709bb737ec 1208858 0 2020-03-12 19:35:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a870a7 0xc003a870a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.228,StartTime:2020-03-12 19:35:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.217: INFO: Pod "webserver-deployment-c7997dcc8-lq44q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lq44q webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-lq44q 082a8e93-2ee8-4f3d-b69f-08f21a9c0d56 1208811 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a87257 0xc003a87258}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolerati
on{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.217: INFO: Pod "webserver-deployment-c7997dcc8-rqkm6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rqkm6 webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-rqkm6 8222851c-f405-4cd7-bbcd-b3087b23cfb6 1208683 0 2020-03-12 19:35:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a873d7 0xc003a873d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:35:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.217: INFO: Pod "webserver-deployment-c7997dcc8-wvl8p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wvl8p webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-wvl8p 4f84053c-acd2-428d-b162-b8a3ab2f4f52 1208709 0 2020-03-12 19:35:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a87557 0xc003a87558}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:35:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.217: INFO: Pod "webserver-deployment-c7997dcc8-zjlrs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zjlrs webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-zjlrs c6535061-68a3-45e5-af6d-538b6cdb1b81 1208852 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a876e7 0xc003a876e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.217: INFO: Pod "webserver-deployment-c7997dcc8-zk27l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zk27l webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-zk27l d6acab4e-5ff0-4afb-9501-71d85bfd865b 1208706 0 2020-03-12 19:35:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a87867 0xc003a87868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:35:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:35:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.217: INFO: Pod "webserver-deployment-c7997dcc8-zl4xc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zl4xc webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-zl4xc 7f11523a-9a8b-49a5-b696-c13a2c6e1b25 1208865 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a879e7 0xc003a879e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 19:36:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 19:36:03.217: INFO: Pod "webserver-deployment-c7997dcc8-zrxgv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zrxgv webserver-deployment-c7997dcc8- deployment-429 /api/v1/namespaces/deployment-429/pods/webserver-deployment-c7997dcc8-zrxgv 4022e140-3006-433e-819d-7fa3dde88c69 1208789 0 2020-03-12 19:36:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 160bc7e1-6948-464f-b6aa-ecac340dcedb 0xc003a87b67 0xc003a87b68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cb76j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cb76j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cb76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:36:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:36:03.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-429" for this suite. • [SLOW TEST:10.425 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":150,"skipped":2446,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:36:03.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:36:04.023: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 19:36:06.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638563, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:36:08.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638563, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 19:36:10.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638564, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638563, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:36:13.061: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:36:13.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1034" for this suite. STEP: Destroying namespace "webhook-1034-markers" for this suite. 
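Stepping back to the deployment-429 dumps above: every webserver-deployment-c7997dcc8-* pod is Pending with a waiting reason of ErrImagePull or ContainerCreating because webserver:404 is an unresolvable tag ("pull access denied, repository does not exist"). That is the point of the proportional-scaling test: the rollout to the new ReplicaSet can never complete, so scaling the Deployment mid-rollout forces the controller to split the added replicas between the old and new ReplicaSets in proportion to their current sizes. A minimal sketch of the same flow with kubectl (the namespace, deployment, container, and label names are taken from the log above; the replica count is illustrative):

```sh
# Freeze a rollout by switching a healthy Deployment to an unpullable tag.
kubectl -n deployment-429 set image deployment/webserver-deployment httpd=webserver:404

# Show each pod's waiting reason (ErrImagePull / ImagePullBackOff /
# ContainerCreating, matching the status dumps above).
kubectl -n deployment-429 get pods -l name=httpd -o \
  jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].state.waiting.reason}{"\n"}{end}'

# Scale while the rollout is stuck: with a RollingUpdate strategy the
# controller distributes the new total proportionally across ReplicaSets.
kubectl -n deployment-429 scale deployment/webserver-deployment --replicas=30
kubectl -n deployment-429 get rs -l name=httpd   # compare DESIRED on old vs. new ReplicaSet
```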
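The webhook test above exercises fail-closed behavior: it registers a webhook whose backing service the API server cannot reach and sets failurePolicy: Fail, so every matching request is rejected outright rather than admitted while the webhook is down. A minimal sketch of such a registration, assuming admissionregistration.k8s.io/v1 and placeholder names; note the real test additionally scopes the webhook with a namespace selector (which is what the webhook-1034-markers namespace is for), whereas this unscoped version would block configmap creation cluster-wide:

```sh
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example            # placeholder name
webhooks:
- name: fail-closed.example.com        # placeholder; must be a qualified name
  failurePolicy: Fail                  # reject matching requests when the webhook is unreachable
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: default               # placeholder: this service does not exist,
      name: no-such-webhook-service    # so every call to the webhook fails
      path: /validate
      port: 443
EOF

# With the webhook unreachable and failurePolicy=Fail, the API server denies this:
kubectl create configmap should-be-rejected

# Clean up, or configmap creation stays blocked.
kubectl delete validatingwebhookconfiguration fail-closed-example
```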
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.065 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":151,"skipped":2450,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:36:13.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:36:13.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3402" for this suite. 
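The secure-master-service check above is a pure read-only assertion: it inspects the built-in kubernetes Service in the default namespace, the stable, TLS-secured in-cluster address of the API server. A quick way to look at the same thing by hand (standard kubectl; the https port is 443 on typical clusters):

```sh
# The built-in Service every cluster exposes for the API server.
kubectl -n default get service kubernetes

# The port is expected to be named "https"; print its number.
kubectl -n default get service kubernetes \
  -o jsonpath='{.spec.ports[?(@.name=="https")].port}{"\n"}'

# The endpoints behind it are the API server address(es).
kubectl -n default get endpoints kubernetes
```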
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":152,"skipped":2460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:36:13.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 12 19:36:13.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2345' Mar 12 19:36:13.874: INFO: stderr: "" Mar 12 19:36:13.874: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 19:36:13.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2345' Mar 12 19:36:13.973: INFO: stderr: "" Mar 12 19:36:13.973: INFO: stdout: "update-demo-nautilus-25crq update-demo-nautilus-zrsx7 " Mar 12 19:36:13.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:14.057: INFO: stderr: "" Mar 12 19:36:14.057: INFO: stdout: "" Mar 12 19:36:14.057: INFO: update-demo-nautilus-25crq is created but not running Mar 12 19:36:19.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2345' Mar 12 19:36:19.165: INFO: stderr: "" Mar 12 19:36:19.165: INFO: stdout: "update-demo-nautilus-25crq update-demo-nautilus-zrsx7 " Mar 12 19:36:19.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:19.251: INFO: stderr: "" Mar 12 19:36:19.251: INFO: stdout: "true" Mar 12 19:36:19.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:19.320: INFO: stderr: "" Mar 12 19:36:19.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:36:19.320: INFO: validating pod update-demo-nautilus-25crq Mar 12 19:36:19.323: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:36:19.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:36:19.323: INFO: update-demo-nautilus-25crq is verified up and running Mar 12 19:36:19.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zrsx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:19.389: INFO: stderr: "" Mar 12 19:36:19.389: INFO: stdout: "true" Mar 12 19:36:19.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zrsx7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:19.461: INFO: stderr: "" Mar 12 19:36:19.461: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:36:19.461: INFO: validating pod update-demo-nautilus-zrsx7 Mar 12 19:36:19.463: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:36:19.463: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:36:19.463: INFO: update-demo-nautilus-zrsx7 is verified up and running STEP: scaling down the replication controller Mar 12 19:36:19.465: INFO: scanned /root for discovery docs: Mar 12 19:36:19.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2345' Mar 12 19:36:20.577: INFO: stderr: "" Mar 12 19:36:20.577: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 19:36:20.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2345' Mar 12 19:36:20.659: INFO: stderr: "" Mar 12 19:36:20.659: INFO: stdout: "update-demo-nautilus-25crq update-demo-nautilus-zrsx7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 12 19:36:25.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2345' Mar 12 19:36:25.769: INFO: stderr: "" Mar 12 19:36:25.769: INFO: stdout: "update-demo-nautilus-25crq update-demo-nautilus-zrsx7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 12 19:36:30.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2345' Mar 12 19:36:30.882: INFO: stderr: "" Mar 12 19:36:30.882: INFO: stdout: "update-demo-nautilus-25crq " Mar 12 19:36:30.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:30.980: INFO: stderr: "" Mar 12 19:36:30.980: INFO: stdout: "true" Mar 12 19:36:30.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:31.060: INFO: stderr: "" Mar 12 19:36:31.060: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:36:31.060: INFO: validating pod update-demo-nautilus-25crq Mar 12 19:36:31.063: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:36:31.063: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:36:31.063: INFO: update-demo-nautilus-25crq is verified up and running STEP: scaling up the replication controller Mar 12 19:36:31.065: INFO: scanned /root for discovery docs: Mar 12 19:36:31.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2345' Mar 12 19:36:32.164: INFO: stderr: "" Mar 12 19:36:32.164: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 19:36:32.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2345' Mar 12 19:36:32.233: INFO: stderr: "" Mar 12 19:36:32.233: INFO: stdout: "update-demo-nautilus-25crq update-demo-nautilus-wjpvd " Mar 12 19:36:32.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:32.295: INFO: stderr: "" Mar 12 19:36:32.295: INFO: stdout: "true" Mar 12 19:36:32.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:32.358: INFO: stderr: "" Mar 12 19:36:32.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:36:32.358: INFO: validating pod update-demo-nautilus-25crq Mar 12 19:36:32.360: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:36:32.360: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:36:32.360: INFO: update-demo-nautilus-25crq is verified up and running Mar 12 19:36:32.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjpvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:32.421: INFO: stderr: "" Mar 12 19:36:32.421: INFO: stdout: "" Mar 12 19:36:32.421: INFO: update-demo-nautilus-wjpvd is created but not running Mar 12 19:36:37.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2345' Mar 12 19:36:37.539: INFO: stderr: "" Mar 12 19:36:37.539: INFO: stdout: "update-demo-nautilus-25crq update-demo-nautilus-wjpvd " Mar 12 19:36:37.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:37.647: INFO: stderr: "" Mar 12 19:36:37.647: INFO: stdout: "true" Mar 12 19:36:37.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25crq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:37.716: INFO: stderr: "" Mar 12 19:36:37.716: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:36:37.716: INFO: validating pod update-demo-nautilus-25crq Mar 12 19:36:37.718: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:36:37.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:36:37.719: INFO: update-demo-nautilus-25crq is verified up and running Mar 12 19:36:37.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjpvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:37.784: INFO: stderr: "" Mar 12 19:36:37.784: INFO: stdout: "true" Mar 12 19:36:37.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjpvd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2345' Mar 12 19:36:37.849: INFO: stderr: "" Mar 12 19:36:37.849: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:36:37.849: INFO: validating pod update-demo-nautilus-wjpvd Mar 12 19:36:37.852: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:36:37.852: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:36:37.852: INFO: update-demo-nautilus-wjpvd is verified up and running STEP: using delete to clean up resources Mar 12 19:36:37.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2345' Mar 12 19:36:37.930: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:36:37.930: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 12 19:36:37.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2345' Mar 12 19:36:38.017: INFO: stderr: "No resources found in kubectl-2345 namespace.\n" Mar 12 19:36:38.017: INFO: stdout: "" Mar 12 19:36:38.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2345 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 19:36:38.084: INFO: stderr: "" Mar 12 19:36:38.084: INFO: stdout: "update-demo-nautilus-25crq\nupdate-demo-nautilus-wjpvd\n" Mar 12 19:36:38.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2345' Mar 12 19:36:38.694: INFO: stderr: "No resources found in kubectl-2345 namespace.\n" Mar 12 19:36:38.694: INFO: stdout: "" Mar 12 19:36:38.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2345 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 19:36:38.772: INFO: stderr: "" Mar 12 19:36:38.772: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:36:38.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2345" for this suite. • [SLOW TEST:25.416 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":153,"skipped":2554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:36:38.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 19:36:38.863: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2671' Mar 12 19:36:38.945: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 19:36:38.945: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 12 19:36:38.950: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 12 19:36:38.975: INFO: scanned /root for discovery docs: Mar 12 19:36:38.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2671' Mar 12 19:36:54.840: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 12 19:36:54.840: INFO: stdout: "Created e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0\nScaling up e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 12 19:36:54.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2671' Mar 12 19:36:54.918: INFO: stderr: "" Mar 12 19:36:54.918: INFO: stdout: "e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0-hr8mv e2e-test-httpd-rc-gnnzf " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Mar 12 19:36:59.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2671' Mar 12 19:37:00.020: INFO: stderr: "" Mar 12 19:37:00.020: INFO: stdout: "e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0-hr8mv " Mar 12 19:37:00.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0-hr8mv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2671' Mar 12 19:37:00.116: INFO: stderr: "" Mar 12 19:37:00.116: INFO: stdout: "true" Mar 12 19:37:00.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0-hr8mv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2671' Mar 12 19:37:00.206: INFO: stderr: "" Mar 12 19:37:00.206: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 12 19:37:00.206: INFO: e2e-test-httpd-rc-e260c0c5817fbc639c530fadf89d0ed0-hr8mv is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 12 19:37:00.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2671' Mar 12 19:37:00.286: INFO: stderr: "" Mar 12 19:37:00.286: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:37:00.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2671" for this suite. • [SLOW TEST:21.557 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":154,"skipped":2619,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:37:00.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:37:04.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1358" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":155,"skipped":2623,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:37:04.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-ljzk STEP: Creating a pod to test atomic-volume-subpath Mar 12 19:37:04.866: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ljzk" in namespace "subpath-2677" to be "success or failure" Mar 12 19:37:04.871: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303256ms Mar 12 19:37:06.874: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 2.008208887s Mar 12 19:37:08.878: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 4.012116983s Mar 12 19:37:10.881: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 6.015226187s Mar 12 19:37:12.885: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 8.018367203s Mar 12 19:37:14.888: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 10.02162831s Mar 12 19:37:16.891: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 12.024492236s Mar 12 19:37:18.897: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 14.031137947s Mar 12 19:37:20.901: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 16.034961083s Mar 12 19:37:22.905: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 18.039010097s Mar 12 19:37:24.910: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Running", Reason="", readiness=true. Elapsed: 20.044180395s Mar 12 19:37:26.914: INFO: Pod "pod-subpath-test-configmap-ljzk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.047938549s STEP: Saw pod success Mar 12 19:37:26.914: INFO: Pod "pod-subpath-test-configmap-ljzk" satisfied condition "success or failure" Mar 12 19:37:26.917: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-ljzk container test-container-subpath-configmap-ljzk: STEP: delete the pod Mar 12 19:37:26.946: INFO: Waiting for pod pod-subpath-test-configmap-ljzk to disappear Mar 12 19:37:26.957: INFO: Pod pod-subpath-test-configmap-ljzk no longer exists STEP: Deleting pod pod-subpath-test-configmap-ljzk Mar 12 19:37:26.957: INFO: Deleting pod "pod-subpath-test-configmap-ljzk" in namespace "subpath-2677" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:37:26.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2677" for this suite. • [SLOW TEST:22.175 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":156,"skipped":2641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:37:26.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 12 19:37:27.051: INFO: Waiting up to 5m0s for pod "pod-cefb56f9-6330-4387-a18e-31ea6a773f95" in namespace "emptydir-2234" to be "success or failure" Mar 12 19:37:27.056: INFO: Pod "pod-cefb56f9-6330-4387-a18e-31ea6a773f95": Phase="Pending", Reason="", readiness=false. Elapsed: 5.683812ms Mar 12 19:37:29.108: INFO: Pod "pod-cefb56f9-6330-4387-a18e-31ea6a773f95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.057652858s STEP: Saw pod success Mar 12 19:37:29.108: INFO: Pod "pod-cefb56f9-6330-4387-a18e-31ea6a773f95" satisfied condition "success or failure" Mar 12 19:37:29.112: INFO: Trying to get logs from node jerma-worker pod pod-cefb56f9-6330-4387-a18e-31ea6a773f95 container test-container: STEP: delete the pod Mar 12 19:37:29.136: INFO: Waiting for pod pod-cefb56f9-6330-4387-a18e-31ea6a773f95 to disappear Mar 12 19:37:29.149: INFO: Pod pod-cefb56f9-6330-4387-a18e-31ea6a773f95 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:37:29.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2234" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2683,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:37:29.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 12 19:37:29.241: INFO: >>> kubeConfig: /root/.kube/config Mar 12 19:37:31.020: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:37:40.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6895" for this suite. 
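The CRD publishing test above registers two CustomResourceDefinitions that share a group and version but declare different kinds, then checks that both show up in the cluster's OpenAPI document. A minimal sketch of one half of such a pair; a second CRD would differ only in its names stanza (for example plural bars, kind Bar). All names here are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com            # shared by both CRDs
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo                   # the second CRD would use kind: Bar here
  versions:
  - name: v1                    # shared version
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF

Once both CRDs are established, `kubectl get --raw /openapi/v2` should list schema definitions for both kinds.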
• [SLOW TEST:11.143 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":158,"skipped":2696,"failed":0} [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:37:40.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 12 19:37:40.335: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 12 19:37:40.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1055' Mar 12 19:37:42.168: INFO: stderr: "" Mar 12 19:37:42.168: INFO: stdout: "service/agnhost-slave created\n" Mar 12 19:37:42.168: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 12 19:37:42.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1055' Mar 12 19:37:42.436: INFO: stderr: "" Mar 12 19:37:42.436: INFO: stdout: "service/agnhost-master created\n" Mar 12 19:37:42.436: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 12 19:37:42.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1055' Mar 12 19:37:42.654: INFO: stderr: "" Mar 12 19:37:42.654: INFO: stdout: "service/frontend created\n" Mar 12 19:37:42.654: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 12 19:37:42.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1055' Mar 12 19:37:42.859: INFO: stderr: "" Mar 12 19:37:42.859: INFO: stdout: "deployment.apps/frontend created\n" Mar 12 19:37:42.860: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 19:37:42.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1055' Mar 12 19:37:43.131: INFO: stderr: "" Mar 12 19:37:43.131: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 12 19:37:43.131: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 19:37:43.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1055' Mar 12 19:37:43.384: INFO: stderr: "" Mar 12 19:37:43.384: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 12 19:37:43.384: INFO: Waiting for all frontend pods to be Running. Mar 12 19:37:48.435: INFO: Waiting for frontend to serve content. Mar 12 19:37:48.448: INFO: Trying to add a new entry to the guestbook. Mar 12 19:37:48.457: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 12 19:37:48.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1055' Mar 12 19:37:48.600: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:37:48.600: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 12 19:37:48.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1055' Mar 12 19:37:48.707: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:37:48.707: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 19:37:48.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1055' Mar 12 19:37:48.847: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:37:48.847: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 19:37:48.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1055' Mar 12 19:37:48.937: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:37:48.937: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 19:37:48.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1055' Mar 12 19:37:49.015: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:37:49.015: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 19:37:49.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1055' Mar 12 19:37:49.105: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:37:49.105: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:37:49.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1055" for this suite. 
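The validation phase of the guestbook test above ("Waiting for all frontend pods to be Running") amounts to watching the labeled pods the manifests create. A rough manual equivalent while the test namespace still exists; the namespace name is taken from the log, but the commands are illustrative and are not the test's own code:

kubectl get pods -l app=guestbook,tier=frontend --namespace=kubectl-1055
# the frontend Deployment asks for 3 replicas, so all three pods should reach Running
kubectl get pods -l app=agnhost,tier=backend --namespace=kubectl-1055
# 1 master pod plus 2 slave pods, per the Deployment replica counts above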
• [SLOW TEST:8.809 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":159,"skipped":2696,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:37:49.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 19:37:53.189: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:37:53.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8921" for this suite. 
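The termination-message test above relies on two container fields: terminationMessagePath, the file the kubelet reads the message from, and terminationMessagePolicy. A minimal sketch reproducing the passing case, where the container writes "OK" to the message file and exits zero; the pod and container names are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError   # fall back to container logs only on error
EOF
# after the pod reaches Succeeded, read the message back:
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'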
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2703,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:37:53.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 19:37:53.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6631' Mar 12 19:37:53.379: INFO: stderr: "" Mar 12 19:37:53.379: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 12 19:37:58.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6631 -o json' Mar 12 19:37:58.529: INFO: stderr: "" Mar 12 19:37:58.529: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-12T19:37:53Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6631\",\n \"resourceVersion\": \"1210121\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6631/pods/e2e-test-httpd-pod\",\n \"uid\": \"b5edef26-aeb2-477b-ad9b-97de0d970ca3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-h796r\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n 
\"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-h796r\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-h796r\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T19:37:53Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T19:37:55Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T19:37:55Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T19:37:53Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://34f062c3c1db77d3bb9581663869dfcceddde092826f948556b73a98d41342a3\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-12T19:37:54Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.251\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.251\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-12T19:37:53Z\"\n }\n}\n" STEP: replace the image in the pod Mar 12 19:37:58.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6631' Mar 12 19:37:58.770: INFO: stderr: "" Mar 12 19:37:58.770: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 12 19:37:58.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6631' Mar 12 19:38:00.871: INFO: stderr: "" Mar 12 19:38:00.871: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:38:00.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6631" for this suite. 
• [SLOW TEST:7.657 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":161,"skipped":2711,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:38:00.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:38:01.534: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:38:04.642: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:38:04.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:38:05.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6140" for this suite. STEP: Destroying namespace "webhook-6140-markers" for this suite. 
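Unlike the earlier fail-closed example, the webhook in this test matches operations on a custom resource, so its rules name the CRD's group and plural resource rather than a built-in type. A sketch of the relevant registration shape; the group, resource, and service reference are illustrative placeholders, not the test's actual objects:

cat <<'EOF' | kubectl create -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-example    # illustrative name
webhooks:
- name: deny-cr.example.com             # illustrative name
  clientConfig:
    service:
      name: sample-webhook              # illustrative service reference
      namespace: default
      path: /custom-resource
  rules:
  - apiGroups: ["example.com"]          # the CRD's group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]   # deny creation, update, and deletion
    resources: ["foos"]                 # the CRD's plural name
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF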
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.041 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":162,"skipped":2726,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:38:05.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:38:06.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ab59543-b7b3-4582-b4c7-320c7bd0b514" in namespace "downward-api-4252" to be "success or failure" Mar 12 19:38:06.083: INFO: Pod "downwardapi-volume-7ab59543-b7b3-4582-b4c7-320c7bd0b514": Phase="Pending", Reason="", readiness=false. Elapsed: 12.001966ms Mar 12 19:38:08.089: INFO: Pod "downwardapi-volume-7ab59543-b7b3-4582-b4c7-320c7bd0b514": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01774599s STEP: Saw pod success Mar 12 19:38:08.089: INFO: Pod "downwardapi-volume-7ab59543-b7b3-4582-b4c7-320c7bd0b514" satisfied condition "success or failure" Mar 12 19:38:08.092: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7ab59543-b7b3-4582-b4c7-320c7bd0b514 container client-container: STEP: delete the pod Mar 12 19:38:08.120: INFO: Waiting for pod downwardapi-volume-7ab59543-b7b3-4582-b4c7-320c7bd0b514 to disappear Mar 12 19:38:08.125: INFO: Pod downwardapi-volume-7ab59543-b7b3-4582-b4c7-320c7bd0b514 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:38:08.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4252" for this suite. 
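The downward API volume used above exposes a container's own resource requests as files inside the pod. A minimal sketch of the mechanism, assuming illustrative names; note the value is rendered in bytes by default, so a 32Mi request reads back as 33554432:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo                  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory    # exposes the container's own memory request
EOF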
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2742,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:38:08.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-f03d35cd-fa98-404a-b1a9-f99bd84fd70e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-f03d35cd-fa98-404a-b1a9-f99bd84fd70e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:38:12.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1718" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2760,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:38:12.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-c1f8fa66-2bd3-4855-b2bf-4fb15df1e20e STEP: Creating a pod to test consume configMaps Mar 12 19:38:12.342: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e253c738-38fe-4f71-9b94-b8a556b41264" in namespace "projected-6901" to be "success or failure" Mar 12 19:38:12.347: INFO: Pod "pod-projected-configmaps-e253c738-38fe-4f71-9b94-b8a556b41264": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320844ms Mar 12 19:38:14.350: INFO: Pod "pod-projected-configmaps-e253c738-38fe-4f71-9b94-b8a556b41264": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00740613s STEP: Saw pod success Mar 12 19:38:14.350: INFO: Pod "pod-projected-configmaps-e253c738-38fe-4f71-9b94-b8a556b41264" satisfied condition "success or failure" Mar 12 19:38:14.352: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e253c738-38fe-4f71-9b94-b8a556b41264 container projected-configmap-volume-test: STEP: delete the pod Mar 12 19:38:14.406: INFO: Waiting for pod pod-projected-configmaps-e253c738-38fe-4f71-9b94-b8a556b41264 to disappear Mar 12 19:38:14.409: INFO: Pod pod-projected-configmaps-e253c738-38fe-4f71-9b94-b8a556b41264 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:38:14.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6901" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:38:14.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-521878c4-35ab-4416-9d4b-ce6124dee11a STEP: Creating a pod to test consume configMaps Mar 12 19:38:14.464: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-82e37866-6ea7-4edc-8bf2-9d225966c5eb" in namespace "projected-7288" to be "success or failure" Mar 12 19:38:14.481: INFO: Pod "pod-projected-configmaps-82e37866-6ea7-4edc-8bf2-9d225966c5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.803226ms Mar 12 19:38:16.483: INFO: Pod "pod-projected-configmaps-82e37866-6ea7-4edc-8bf2-9d225966c5eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019069725s STEP: Saw pod success Mar 12 19:38:16.483: INFO: Pod "pod-projected-configmaps-82e37866-6ea7-4edc-8bf2-9d225966c5eb" satisfied condition "success or failure" Mar 12 19:38:16.485: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-82e37866-6ea7-4edc-8bf2-9d225966c5eb container projected-configmap-volume-test: STEP: delete the pod Mar 12 19:38:16.504: INFO: Waiting for pod pod-projected-configmaps-82e37866-6ea7-4edc-8bf2-9d225966c5eb to disappear Mar 12 19:38:16.510: INFO: Pod pod-projected-configmaps-82e37866-6ea7-4edc-8bf2-9d225966c5eb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:38:16.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7288" for this suite. 
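The two projected-configMap specs around this point exercise one pattern: a ConfigMap projected into a volume, with the consumer optionally running as a non-root user and the keys optionally remapped to custom paths. A hand-written equivalent might look roughly like this (all names and values are illustrative):

kubectl create configmap projected-configmap-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000    # non-root, as in the first of the two specs
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-example
          items:                     # the "mappings": key data-1 -> file path/to/data-1
          - key: data-1
            path: path/to/data-1
EOF

Without the items list, every key would be projected under its own name at the volume root; the mappings variant verifies the key-to-path remapping specifically.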
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2842,"failed":0} SSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:38:16.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9313 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9313;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9313 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9313;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9313.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9313.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9313.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9313.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9313.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9313.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 142.87.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.87.142_udp@PTR;check="$$(dig +tcp +noall +answer +search 142.87.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.87.142_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9313 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9313;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9313 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9313;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9313.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9313.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9313.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9313.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9313.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9313.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 142.87.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.87.142_udp@PTR;check="$$(dig +tcp +noall +answer +search 142.87.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.87.142_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:38:20.687: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.690: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.699: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.701: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.703: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.705: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.723: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.725: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.727: INFO: Unable to read jessie_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.730: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.732: INFO: Unable to read jessie_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.735: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.737: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.739: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:20.752: INFO: Lookups using dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9313 wheezy_tcp@dns-test-service.dns-9313 wheezy_udp@dns-test-service.dns-9313.svc wheezy_tcp@dns-test-service.dns-9313.svc wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9313 jessie_tcp@dns-test-service.dns-9313 jessie_udp@dns-test-service.dns-9313.svc jessie_tcp@dns-test-service.dns-9313.svc jessie_udp@_http._tcp.dns-test-service.dns-9313.svc jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc] Mar 12 19:38:25.756: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.759: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.764: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.766: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.768: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.771: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.773: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.788: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.789: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.792: INFO: Unable to read jessie_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.839: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.842: INFO: Unable to read jessie_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.853: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.855: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:25.872: INFO: Lookups using dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9313 wheezy_tcp@dns-test-service.dns-9313 wheezy_udp@dns-test-service.dns-9313.svc wheezy_tcp@dns-test-service.dns-9313.svc wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9313 jessie_tcp@dns-test-service.dns-9313 jessie_udp@dns-test-service.dns-9313.svc jessie_tcp@dns-test-service.dns-9313.svc jessie_udp@_http._tcp.dns-test-service.dns-9313.svc jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc] Mar 12 19:38:30.757: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.760: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.763: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.766: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313 from pod 
dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.769: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.771: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.774: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.778: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.798: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.800: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.803: INFO: Unable to read jessie_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.806: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.808: INFO: Unable to read jessie_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.810: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.813: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.815: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:30.830: INFO: Lookups using dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9313 wheezy_tcp@dns-test-service.dns-9313 wheezy_udp@dns-test-service.dns-9313.svc wheezy_tcp@dns-test-service.dns-9313.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9313 jessie_tcp@dns-test-service.dns-9313 jessie_udp@dns-test-service.dns-9313.svc jessie_tcp@dns-test-service.dns-9313.svc jessie_udp@_http._tcp.dns-test-service.dns-9313.svc jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc] Mar 12 19:38:35.757: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.761: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.764: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.767: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.773: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.777: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.780: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.800: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.803: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.806: INFO: Unable to read jessie_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.809: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.811: INFO: Unable to read jessie_udp@dns-test-service.dns-9313.svc from pod 
dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.814: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.817: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.822: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:35.837: INFO: Lookups using dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9313 wheezy_tcp@dns-test-service.dns-9313 wheezy_udp@dns-test-service.dns-9313.svc wheezy_tcp@dns-test-service.dns-9313.svc wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9313 jessie_tcp@dns-test-service.dns-9313 jessie_udp@dns-test-service.dns-9313.svc jessie_tcp@dns-test-service.dns-9313.svc jessie_udp@_http._tcp.dns-test-service.dns-9313.svc jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc] Mar 12 19:38:40.757: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.760: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.763: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.766: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.768: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.771: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.774: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.776: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod 
dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.793: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.796: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.799: INFO: Unable to read jessie_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.801: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.803: INFO: Unable to read jessie_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.805: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.807: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.810: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:40.827: INFO: Lookups using dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9313 wheezy_tcp@dns-test-service.dns-9313 wheezy_udp@dns-test-service.dns-9313.svc wheezy_tcp@dns-test-service.dns-9313.svc wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9313 jessie_tcp@dns-test-service.dns-9313 jessie_udp@dns-test-service.dns-9313.svc jessie_tcp@dns-test-service.dns-9313.svc jessie_udp@_http._tcp.dns-test-service.dns-9313.svc jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc] Mar 12 19:38:45.756: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.759: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the 
server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.764: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.766: INFO: Unable to read wheezy_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.768: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.770: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.772: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.788: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.790: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.792: INFO: Unable to read jessie_udp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.794: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313 from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.796: INFO: Unable to read jessie_udp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.798: INFO: Unable to read jessie_tcp@dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.800: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.802: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc from pod dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51: the server could not find the requested resource (get pods dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51) Mar 12 19:38:45.816: INFO: Lookups using dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9313 wheezy_tcp@dns-test-service.dns-9313 wheezy_udp@dns-test-service.dns-9313.svc wheezy_tcp@dns-test-service.dns-9313.svc wheezy_udp@_http._tcp.dns-test-service.dns-9313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9313 jessie_tcp@dns-test-service.dns-9313 jessie_udp@dns-test-service.dns-9313.svc jessie_tcp@dns-test-service.dns-9313.svc jessie_udp@_http._tcp.dns-test-service.dns-9313.svc jessie_tcp@_http._tcp.dns-test-service.dns-9313.svc] Mar 12 19:38:50.833: INFO: DNS probes using dns-9313/dns-test-9713e4f8-e8f1-4d2a-b7af-61e33d784a51 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:38:50.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9313" for this suite. • [SLOW TEST:34.496 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":167,"skipped":2845,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:38:51.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5869.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5869.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:38:55.129: INFO: DNS probes using dns-test-4a62a958-7133-41f0-8082-c4026009a9df succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5869.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5869.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local; sleep 1; done STEP: 
creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:38:59.223: INFO: File wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:38:59.226: INFO: File jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:38:59.226: INFO: Lookups using dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 failed for: [wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local] Mar 12 19:39:04.231: INFO: File wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:39:04.235: INFO: File jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:39:04.235: INFO: Lookups using dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 failed for: [wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local] Mar 12 19:39:09.231: INFO: File wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:39:09.235: INFO: File jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:39:09.235: INFO: Lookups using dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 failed for: [wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local] Mar 12 19:39:14.231: INFO: File wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:39:14.234: INFO: File jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:39:14.234: INFO: Lookups using dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 failed for: [wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local] Mar 12 19:39:19.231: INFO: File wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 19:39:19.234: INFO: File jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local from pod dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
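The repeated "contains 'foo.example.com. ' instead of 'bar.example.com.'" records here are the prober polling while the updated CNAME propagates through the cluster DNS and its caches; they are expected retries, not spec failures. (Likewise, the per-name retries in the preceding partial-qualified-names spec lean on the pod's resolv.conf search path, which is how a partial name like dns-test-service resolves via dns-test-service.dns-9313.svc.cluster.local.) The ExternalName service under test can be reproduced by hand roughly as follows (name and namespace taken from the log; the patch mirrors the "changing the externalName" step, though the framework drives the API directly rather than via kubectl):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-5869
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# the "changing the externalName to bar.example.com" step amounts to:
kubectl -n dns-5869 patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'

Once the change is visible, dig +short dns-test-service-3.dns-5869.svc.cluster.local CNAME returns bar.example.com. and the probes below succeed.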
Mar 12 19:39:19.234: INFO: Lookups using dns-5869/dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 failed for: [wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local] Mar 12 19:39:24.232: INFO: DNS probes using dns-test-ab349960-2cf1-4791-a8c0-72758b4c5bb7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5869.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5869.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5869.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5869.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:39:28.375: INFO: DNS probes using dns-test-17a6b8ce-81ec-408c-b662-b11cb1ebdb4c succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:39:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5869" for this suite. • [SLOW TEST:37.438 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":168,"skipped":2848,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:39:28.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 12 19:39:28.545: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7929 /api/v1/namespaces/watch-7929/configmaps/e2e-watch-test-label-changed 5ae0fbd0-978b-4f85-b9ca-a1b791ff9b69 1210735 0 2020-03-12 19:39:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 19:39:28.545: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7929 /api/v1/namespaces/watch-7929/configmaps/e2e-watch-test-label-changed 5ae0fbd0-978b-4f85-b9ca-a1b791ff9b69 
1210736 0 2020-03-12 19:39:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 12 19:39:28.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7929 /api/v1/namespaces/watch-7929/configmaps/e2e-watch-test-label-changed 5ae0fbd0-978b-4f85-b9ca-a1b791ff9b69 1210737 0 2020-03-12 19:39:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 12 19:39:38.600: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7929 /api/v1/namespaces/watch-7929/configmaps/e2e-watch-test-label-changed 5ae0fbd0-978b-4f85-b9ca-a1b791ff9b69 1210814 0 2020-03-12 19:39:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 19:39:38.600: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7929 /api/v1/namespaces/watch-7929/configmaps/e2e-watch-test-label-changed 5ae0fbd0-978b-4f85-b9ca-a1b791ff9b69 1210815 0 2020-03-12 19:39:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 12 19:39:38.600: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7929 /api/v1/namespaces/watch-7929/configmaps/e2e-watch-test-label-changed 5ae0fbd0-978b-4f85-b9ca-a1b791ff9b69 1210816 0 2020-03-12 19:39:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:39:38.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7929" for this suite. 
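The DELETED/ADDED notifications above illustrate the subtlety this spec verifies: relabeling an object out of a label-selector watch surfaces as a DELETED event, and restoring the label surfaces as ADDED, even though the ConfigMap existed the whole time. Roughly the same sequence can be driven by hand (sketch only; the --output-watch-events flag is assumed available on a 1.17-era kubectl):

# terminal 1: watch configmaps matching the selector, printing event types
kubectl -n watch-7929 get configmaps -l watch-this-configmap=label-changed-and-restored --watch --output-watch-events
# terminal 2: mutate the object
kubectl -n watch-7929 create configmap e2e-watch-test-label-changed
kubectl -n watch-7929 label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored   # watcher sees ADDED
kubectl -n watch-7929 label configmap e2e-watch-test-label-changed watch-this-configmap=wrong-value --overwrite      # watcher sees DELETED
kubectl -n watch-7929 label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite   # watcher sees ADDED again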
• [SLOW TEST:10.158 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":169,"skipped":2857,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:39:38.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 12 19:39:38.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2230' Mar 12 19:39:39.026: INFO: stderr: "" Mar 12 19:39:39.026: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 19:39:39.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2230' Mar 12 19:39:39.142: INFO: stderr: "" Mar 12 19:39:39.142: INFO: stdout: "update-demo-nautilus-hfj9b update-demo-nautilus-xj2qn " Mar 12 19:39:39.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hfj9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2230' Mar 12 19:39:39.222: INFO: stderr: "" Mar 12 19:39:39.222: INFO: stdout: "" Mar 12 19:39:39.222: INFO: update-demo-nautilus-hfj9b is created but not running Mar 12 19:39:44.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2230' Mar 12 19:39:44.335: INFO: stderr: "" Mar 12 19:39:44.335: INFO: stdout: "update-demo-nautilus-hfj9b update-demo-nautilus-xj2qn " Mar 12 19:39:44.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hfj9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2230' Mar 12 19:39:44.409: INFO: stderr: "" Mar 12 19:39:44.409: INFO: stdout: "true" Mar 12 19:39:44.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hfj9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2230' Mar 12 19:39:44.486: INFO: stderr: "" Mar 12 19:39:44.486: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:39:44.486: INFO: validating pod update-demo-nautilus-hfj9b Mar 12 19:39:44.489: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:39:44.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:39:44.489: INFO: update-demo-nautilus-hfj9b is verified up and running Mar 12 19:39:44.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xj2qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2230' Mar 12 19:39:44.562: INFO: stderr: "" Mar 12 19:39:44.562: INFO: stdout: "true" Mar 12 19:39:44.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xj2qn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2230' Mar 12 19:39:44.631: INFO: stderr: "" Mar 12 19:39:44.631: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 19:39:44.631: INFO: validating pod update-demo-nautilus-xj2qn Mar 12 19:39:44.634: INFO: got data: { "image": "nautilus.jpg" } Mar 12 19:39:44.634: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 19:39:44.634: INFO: update-demo-nautilus-xj2qn is verified up and running STEP: using delete to clean up resources Mar 12 19:39:44.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2230' Mar 12 19:39:44.705: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 12 19:39:44.705: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 12 19:39:44.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2230' Mar 12 19:39:44.772: INFO: stderr: "No resources found in kubectl-2230 namespace.\n" Mar 12 19:39:44.772: INFO: stdout: "" Mar 12 19:39:44.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2230 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 19:39:44.879: INFO: stderr: "" Mar 12 19:39:44.879: INFO: stdout: "update-demo-nautilus-hfj9b\nupdate-demo-nautilus-xj2qn\n" Mar 12 19:39:45.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2230' Mar 12 19:39:45.477: INFO: stderr: "No resources found in kubectl-2230 namespace.\n" Mar 12 19:39:45.477: INFO: stdout: "" Mar 12 19:39:45.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2230 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 19:39:45.559: INFO: stderr: "" Mar 12 19:39:45.559: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:39:45.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2230" for this suite. • [SLOW TEST:6.956 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":170,"skipped":2873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:39:45.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local A)" 
&& test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2873.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2873.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2873.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:39:49.684: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.687: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.690: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.692: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.700: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.704: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.706: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.709: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:49.713: INFO: Lookups using dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local] Mar 12 19:39:54.718: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource 
(get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.721: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.723: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.726: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.735: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.737: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.740: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.743: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:54.748: INFO: Lookups using dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local] Mar 12 19:39:59.718: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.721: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.723: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.726: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local from 
pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.736: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.739: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.743: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.746: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:39:59.753: INFO: Lookups using dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local] Mar 12 19:40:04.718: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.722: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.725: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.728: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.737: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.740: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods 
dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.742: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.744: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:04.750: INFO: Lookups using dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local] Mar 12 19:40:09.718: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.722: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.725: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.728: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.738: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.741: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.743: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.746: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:09.752: INFO: Lookups using dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local] Mar 12 19:40:14.718: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.721: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.725: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.728: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.737: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.740: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.742: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.745: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local from pod dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1: the server could not find the requested resource (get pods dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1) Mar 12 19:40:14.750: INFO: Lookups using dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2873.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2873.svc.cluster.local jessie_udp@dns-test-service-2.dns-2873.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2873.svc.cluster.local] Mar 12 19:40:19.750: INFO: DNS probes using dns-2873/dns-test-53e2b94c-4e13-437e-a6b2-4926b17f3db1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:40:19.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2873" for this suite. • [SLOW TEST:34.341 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":171,"skipped":2914,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:40:19.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 12 19:40:20.005: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9558 /api/v1/namespaces/watch-9558/configmaps/e2e-watch-test-watch-closed adc32df2-19c1-4df6-a1c0-acfbd3ab0267 1211037 0 2020-03-12 19:40:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 19:40:20.005: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9558 /api/v1/namespaces/watch-9558/configmaps/e2e-watch-test-watch-closed adc32df2-19c1-4df6-a1c0-acfbd3ab0267 1211038 0 2020-03-12 19:40:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 12 19:40:20.040: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9558 /api/v1/namespaces/watch-9558/configmaps/e2e-watch-test-watch-closed adc32df2-19c1-4df6-a1c0-acfbd3ab0267 1211039 0 2020-03-12 19:40:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 19:40:20.040: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9558 /api/v1/namespaces/watch-9558/configmaps/e2e-watch-test-watch-closed adc32df2-19c1-4df6-a1c0-acfbd3ab0267 1211040 0 2020-03-12 19:40:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:40:20.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9558" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":172,"skipped":2916,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:40:20.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 19:40:20.124: INFO: PodSpec: initContainers in spec.initContainers Mar 12 19:41:08.109: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a1241b38-2185-47e8-a5ac-b853e543a9cf", GenerateName:"", Namespace:"init-container-7028", SelfLink:"/api/v1/namespaces/init-container-7028/pods/pod-init-a1241b38-2185-47e8-a5ac-b853e543a9cf", UID:"3c0681c7-ae06-4535-8b98-ceb130a7f2cf", ResourceVersion:"1211236", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719638820, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"124022280"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9lqpv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00520e9c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9lqpv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9lqpv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9lqpv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003760928), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003bca4e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037609c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037609e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037609e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037609ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638820, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638820, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638820, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719638820, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.252", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.252"}}, StartTime:(*v1.Time)(0xc003c791c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00236be30)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00236bea0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dadcdd900dc88ea87e3c00c0aae0345f4860d16a7b9eb6198e92383a2d216ca5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003c79200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003c791e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003760a6f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:41:08.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7028" for this suite. • [SLOW TEST:48.067 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":173,"skipped":2917,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:41:08.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-677d9e4e-bfa2-4c3b-a674-678bf097dda0 STEP: Creating a pod to test consume configMaps Mar 12 19:41:08.173: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdec3da3-d986-46f8-86a9-8c05209ea44c" in namespace "projected-1334" to be "success or failure" Mar 12 19:41:08.210: INFO: Pod "pod-projected-configmaps-bdec3da3-d986-46f8-86a9-8c05209ea44c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.186927ms Mar 12 19:41:10.214: INFO: Pod "pod-projected-configmaps-bdec3da3-d986-46f8-86a9-8c05209ea44c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.040615668s STEP: Saw pod success Mar 12 19:41:10.214: INFO: Pod "pod-projected-configmaps-bdec3da3-d986-46f8-86a9-8c05209ea44c" satisfied condition "success or failure" Mar 12 19:41:10.216: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-bdec3da3-d986-46f8-86a9-8c05209ea44c container projected-configmap-volume-test: STEP: delete the pod Mar 12 19:41:10.245: INFO: Waiting for pod pod-projected-configmaps-bdec3da3-d986-46f8-86a9-8c05209ea44c to disappear Mar 12 19:41:10.249: INFO: Pod pod-projected-configmaps-bdec3da3-d986-46f8-86a9-8c05209ea44c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:41:10.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1334" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2926,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:41:10.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-87f9fe69-3bfc-42d7-9d58-7cff2120c86d STEP: Creating a pod to test consume secrets Mar 12 19:41:10.360: INFO: Waiting up to 5m0s for pod "pod-secrets-5c8562a3-5700-4d63-b726-dd2d4c001a3e" in namespace "secrets-4933" to be "success or failure" Mar 12 19:41:10.363: INFO: Pod "pod-secrets-5c8562a3-5700-4d63-b726-dd2d4c001a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.263757ms Mar 12 19:41:12.366: INFO: Pod "pod-secrets-5c8562a3-5700-4d63-b726-dd2d4c001a3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006366281s STEP: Saw pod success Mar 12 19:41:12.366: INFO: Pod "pod-secrets-5c8562a3-5700-4d63-b726-dd2d4c001a3e" satisfied condition "success or failure" Mar 12 19:41:12.368: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-5c8562a3-5700-4d63-b726-dd2d4c001a3e container secret-volume-test: STEP: delete the pod Mar 12 19:41:12.395: INFO: Waiting for pod pod-secrets-5c8562a3-5700-4d63-b726-dd2d4c001a3e to disappear Mar 12 19:41:12.397: INFO: Pod pod-secrets-5c8562a3-5700-4d63-b726-dd2d4c001a3e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:41:12.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4933" for this suite. 
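For anyone reproducing the "multiple volumes" check outside the suite, the essence is a single secret mounted through two independent volume sources in one pod. A minimal hand-written sketch follows; the secret, pod, and mount names are illustrative, not the generated ones from the run above:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-volumes        # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # Both mounts should expose the same key with identical content.
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
EOF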
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2945,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:41:12.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:41:12.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dade930e-32d3-4b6e-9f05-bc86039b42ce" in namespace "downward-api-6661" to be "success or failure" Mar 12 19:41:12.492: INFO: Pod "downwardapi-volume-dade930e-32d3-4b6e-9f05-bc86039b42ce": Phase="Pending", Reason="", readiness=false. Elapsed: 9.603558ms Mar 12 19:41:14.495: INFO: Pod "downwardapi-volume-dade930e-32d3-4b6e-9f05-bc86039b42ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012715001s STEP: Saw pod success Mar 12 19:41:14.495: INFO: Pod "downwardapi-volume-dade930e-32d3-4b6e-9f05-bc86039b42ce" satisfied condition "success or failure" Mar 12 19:41:14.497: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dade930e-32d3-4b6e-9f05-bc86039b42ce container client-container: STEP: delete the pod Mar 12 19:41:14.541: INFO: Waiting for pod downwardapi-volume-dade930e-32d3-4b6e-9f05-bc86039b42ce to disappear Mar 12 19:41:14.546: INFO: Pod downwardapi-volume-dade930e-32d3-4b6e-9f05-bc86039b42ce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:41:14.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6661" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2950,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:41:14.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:41:14.589: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 19:41:17.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8656 create -f -' Mar 12 19:41:19.278: INFO: stderr: "" Mar 12 19:41:19.278: INFO: stdout: "e2e-test-crd-publish-openapi-4724-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 19:41:19.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8656 delete e2e-test-crd-publish-openapi-4724-crds test-cr' Mar 12 19:41:19.393: INFO: stderr: "" Mar 12 19:41:19.393: INFO: stdout: "e2e-test-crd-publish-openapi-4724-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 12 19:41:19.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8656 apply -f -' Mar 12 19:41:19.592: INFO: stderr: "" Mar 12 19:41:19.592: INFO: stdout: "e2e-test-crd-publish-openapi-4724-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 19:41:19.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8656 delete e2e-test-crd-publish-openapi-4724-crds test-cr' Mar 12 19:41:19.701: INFO: stderr: "" Mar 12 19:41:19.701: INFO: stdout: "e2e-test-crd-publish-openapi-4724-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 12 19:41:19.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4724-crds' Mar 12 19:41:19.924: INFO: stderr: "" Mar 12 19:41:19.924: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4724-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:41:21.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8656" for this suite. 
• [SLOW TEST:7.150 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":177,"skipped":2972,"failed":0} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:41:21.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 in namespace container-probe-1321 Mar 12 19:41:23.774: INFO: Started pod liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 in namespace container-probe-1321 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 19:41:23.777: INFO: Initial restart count of pod liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 is 0 Mar 12 19:41:35.805: INFO: Restart count of pod container-probe-1321/liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 is now 1 (12.027919456s elapsed) Mar 12 19:41:55.845: INFO: Restart count of pod container-probe-1321/liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 is now 2 (32.068281973s elapsed) Mar 12 19:42:15.902: INFO: Restart count of pod container-probe-1321/liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 is now 3 (52.125694537s elapsed) Mar 12 19:42:35.952: INFO: Restart count of pod container-probe-1321/liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 is now 4 (1m12.1756097s elapsed) Mar 12 19:43:48.193: INFO: Restart count of pod container-probe-1321/liveness-3e42d09c-da8f-481b-895c-1019ad8320c1 is now 5 (2m24.416156777s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:43:48.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1321" for this suite. 
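A probe that can never succeed is the quickest way to reproduce a monotonically climbing restartCount by hand; the growing gaps between restarts in the log above reflect the kubelet's exponential back-off. A sketch, with arbitrary timings and a deliberately missing file:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo             # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    args: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/missing"]   # never exists, so the probe always fails
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# The count should only ever increase:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'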
• [SLOW TEST:146.537 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2975,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:43:48.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:43:48.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918" in namespace "downward-api-2075" to be "success or failure" Mar 12 19:43:48.308: INFO: Pod "downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918": Phase="Pending", Reason="", readiness=false. Elapsed: 21.24513ms Mar 12 19:43:50.311: INFO: Pod "downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02424414s Mar 12 19:43:52.315: INFO: Pod "downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027506078s STEP: Saw pod success Mar 12 19:43:52.315: INFO: Pod "downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918" satisfied condition "success or failure" Mar 12 19:43:52.317: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918 container client-container: STEP: delete the pod Mar 12 19:43:52.340: INFO: Waiting for pod downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918 to disappear Mar 12 19:43:52.345: INFO: Pod downwardapi-volume-ce924e7e-c456-4686-897c-b0c3167f2918 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:43:52.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2075" for this suite. 
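The cpu request reaches the container through a resourceFieldRef item in a downwardAPI volume; the divisor controls the reported unit. A minimal sketch (the names and the 1m divisor are illustrative choices):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu              # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m             # report the value in millicores
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]   # prints 100
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF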
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2988,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:43:52.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-80d10dc0-566a-4ace-ac05-362f98dd02e2 STEP: Creating a pod to test consume secrets Mar 12 19:43:52.401: INFO: Waiting up to 5m0s for pod "pod-secrets-ac8757fe-a58c-4712-8b5b-e641abae5e4e" in namespace "secrets-2913" to be "success or failure" Mar 12 19:43:52.439: INFO: Pod "pod-secrets-ac8757fe-a58c-4712-8b5b-e641abae5e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 37.55506ms Mar 12 19:43:54.457: INFO: Pod "pod-secrets-ac8757fe-a58c-4712-8b5b-e641abae5e4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.055581478s STEP: Saw pod success Mar 12 19:43:54.457: INFO: Pod "pod-secrets-ac8757fe-a58c-4712-8b5b-e641abae5e4e" satisfied condition "success or failure" Mar 12 19:43:54.459: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ac8757fe-a58c-4712-8b5b-e641abae5e4e container secret-volume-test: STEP: delete the pod Mar 12 19:43:54.478: INFO: Waiting for pod pod-secrets-ac8757fe-a58c-4712-8b5b-e641abae5e4e to disappear Mar 12 19:43:54.483: INFO: Pod pod-secrets-ac8757fe-a58c-4712-8b5b-e641abae5e4e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:43:54.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2913" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2989,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:43:54.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 12 19:43:54.524: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:09.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9049" for this suite. • [SLOW TEST:14.544 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":181,"skipped":2992,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:09.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-75e6ac06-33b8-4459-a9f7-860feef003df STEP: Creating a pod to test consume configMaps Mar 12 19:44:09.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-b65fc265-a7e1-49dd-95d3-f9f9290e3435" in namespace "configmap-8065" to be "success or failure" Mar 12 19:44:09.132: INFO: Pod 
"pod-configmaps-b65fc265-a7e1-49dd-95d3-f9f9290e3435": Phase="Pending", Reason="", readiness=false. Elapsed: 7.146479ms Mar 12 19:44:11.135: INFO: Pod "pod-configmaps-b65fc265-a7e1-49dd-95d3-f9f9290e3435": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010691081s STEP: Saw pod success Mar 12 19:44:11.136: INFO: Pod "pod-configmaps-b65fc265-a7e1-49dd-95d3-f9f9290e3435" satisfied condition "success or failure" Mar 12 19:44:11.139: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b65fc265-a7e1-49dd-95d3-f9f9290e3435 container configmap-volume-test: STEP: delete the pod Mar 12 19:44:11.226: INFO: Waiting for pod pod-configmaps-b65fc265-a7e1-49dd-95d3-f9f9290e3435 to disappear Mar 12 19:44:11.244: INFO: Pod pod-configmaps-b65fc265-a7e1-49dd-95d3-f9f9290e3435 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:11.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8065" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3016,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:11.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa Mar 12 19:44:11.469: INFO: Pod name my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa: Found 0 pods out of 1 Mar 12 19:44:16.471: INFO: Pod name my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa: Found 1 pods out of 1 Mar 12 19:44:16.471: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa" are running Mar 12 19:44:16.473: INFO: Pod "my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa-ht5qd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:44:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:44:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:44:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 19:44:11 +0000 UTC Reason: Message:}]) Mar 12 19:44:16.473: INFO: Trying to dial the pod Mar 12 19:44:21.483: INFO: Controller my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa: Got expected result from replica 1 [my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa-ht5qd]: 
"my-hostname-basic-eb93eb9a-f73a-4935-86b7-aa9f73f9d8fa-ht5qd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:21.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3903" for this suite. • [SLOW TEST:10.238 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":183,"skipped":3020,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:21.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:44:21.546: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c30de277-2891-4d1f-98fa-4b0859b78a06" in namespace "projected-8789" to be "success or failure" Mar 12 19:44:21.550: INFO: Pod "downwardapi-volume-c30de277-2891-4d1f-98fa-4b0859b78a06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209744ms Mar 12 19:44:23.553: INFO: Pod "downwardapi-volume-c30de277-2891-4d1f-98fa-4b0859b78a06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007148462s STEP: Saw pod success Mar 12 19:44:23.553: INFO: Pod "downwardapi-volume-c30de277-2891-4d1f-98fa-4b0859b78a06" satisfied condition "success or failure" Mar 12 19:44:23.563: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c30de277-2891-4d1f-98fa-4b0859b78a06 container client-container: STEP: delete the pod Mar 12 19:44:23.592: INFO: Waiting for pod downwardapi-volume-c30de277-2891-4d1f-98fa-4b0859b78a06 to disappear Mar 12 19:44:23.599: INFO: Pod downwardapi-volume-c30de277-2891-4d1f-98fa-4b0859b78a06 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:23.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8789" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3029,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:23.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2b0c9e2c-ff73-423f-b751-4575cfeb00d5 STEP: Creating a pod to test consume configMaps Mar 12 19:44:23.702: INFO: Waiting up to 5m0s for pod "pod-configmaps-831d8c47-11cf-4827-bab6-529eb13315b3" in namespace "configmap-2051" to be "success or failure" Mar 12 19:44:23.729: INFO: Pod "pod-configmaps-831d8c47-11cf-4827-bab6-529eb13315b3": Phase="Pending", Reason="", readiness=false. Elapsed: 26.702422ms Mar 12 19:44:25.732: INFO: Pod "pod-configmaps-831d8c47-11cf-4827-bab6-529eb13315b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.030481223s STEP: Saw pod success Mar 12 19:44:25.732: INFO: Pod "pod-configmaps-831d8c47-11cf-4827-bab6-529eb13315b3" satisfied condition "success or failure" Mar 12 19:44:25.735: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-831d8c47-11cf-4827-bab6-529eb13315b3 container configmap-volume-test: STEP: delete the pod Mar 12 19:44:25.751: INFO: Waiting for pod pod-configmaps-831d8c47-11cf-4827-bab6-529eb13315b3 to disappear Mar 12 19:44:25.755: INFO: Pod pod-configmaps-831d8c47-11cf-4827-bab6-529eb13315b3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:25.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2051" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3049,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:25.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:44:25.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e362e79-902e-4c87-8f4c-6541acbbcb86" in namespace "projected-3567" to be "success or failure" Mar 12 19:44:25.852: INFO: Pod "downwardapi-volume-8e362e79-902e-4c87-8f4c-6541acbbcb86": Phase="Pending", Reason="", readiness=false. Elapsed: 35.0549ms Mar 12 19:44:27.856: INFO: Pod "downwardapi-volume-8e362e79-902e-4c87-8f4c-6541acbbcb86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039329931s STEP: Saw pod success Mar 12 19:44:27.856: INFO: Pod "downwardapi-volume-8e362e79-902e-4c87-8f4c-6541acbbcb86" satisfied condition "success or failure" Mar 12 19:44:27.859: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8e362e79-902e-4c87-8f4c-6541acbbcb86 container client-container: STEP: delete the pod Mar 12 19:44:27.878: INFO: Waiting for pod downwardapi-volume-8e362e79-902e-4c87-8f4c-6541acbbcb86 to disappear Mar 12 19:44:27.918: INFO: Pod downwardapi-volume-8e362e79-902e-4c87-8f4c-6541acbbcb86 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:27.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3567" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3049,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:27.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:28.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6720" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":187,"skipped":3070,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:28.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 19:44:28.176: INFO: Waiting up to 5m0s for pod "downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be" in namespace "downward-api-2555" to be "success or failure" Mar 12 19:44:28.193: INFO: Pod "downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be": Phase="Pending", Reason="", readiness=false. Elapsed: 17.026384ms Mar 12 19:44:30.212: INFO: Pod "downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03557934s Mar 12 19:44:32.217: INFO: Pod "downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041253201s STEP: Saw pod success Mar 12 19:44:32.217: INFO: Pod "downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be" satisfied condition "success or failure" Mar 12 19:44:32.220: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be container dapi-container: STEP: delete the pod Mar 12 19:44:32.240: INFO: Waiting for pod downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be to disappear Mar 12 19:44:32.245: INFO: Pod downward-api-a78ee959-c43f-4f8e-a6f2-43eb7ee4a3be no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:32.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2555" for this suite. 
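The Downward API env-var case above exposes the container's own resource requests and limits through resourceFieldRef. A minimal sketch follows; the names, image, command, and the specific quantities are assumptions, while the env wiring is the feature under test.

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-example         # assumed
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox                   # stand-in for the e2e test image
      command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
      resources:
        requests:
          cpu: 250m                    # assumed quantities
          memory: 32Mi
        limits:
          cpu: 500m
          memory: 64Mi
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
      - name: CPU_REQUEST
        valueFrom:
          resourceFieldRef:
            resource: requests.cpu
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef:
            resource: requests.memory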
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:32.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 12 19:44:36.360: INFO: &Pod{ObjectMeta:{send-events-20e2a8ae-f744-4d4c-ac3b-9babf47a3fdf events-19 /api/v1/namespaces/events-19/pods/send-events-20e2a8ae-f744-4d4c-ac3b-9babf47a3fdf 2e73f07c-bbf8-4800-8d2a-e7aaa639d2d3 1212236 0 2020-03-12 19:44:32 +0000 UTC map[name:foo time:314616699] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfmmv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfmmv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfmmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/n
ot-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:44:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:44:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:44:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:44:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.9,StartTime:2020-03-12 19:44:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:44:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9075f8c421ede68974925268d3e43680a0ae5c6e14c934ee8d7e3c123b8de7bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 12 19:44:38.364: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 12 19:44:40.368: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:40.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-19" for this suite. 
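After creating and retrieving the pod dumped above, the test lists events whose involvedObject matches that pod: first one from the scheduler, then one from the kubelet. Roughly, the scheduler event it waits for has the following shape; every value here is illustrative, and metadata.name is assigned by the server in practice.

  apiVersion: v1
  kind: Event
  metadata:
    name: send-events-example.<server-generated-suffix>
    namespace: events-example            # assumed
  type: Normal
  reason: Scheduled
  message: Successfully assigned events-example/send-events-example to jerma-worker
  source:
    component: default-scheduler
  involvedObject:
    kind: Pod
    name: send-events-example            # assumed pod name
    namespace: events-example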
• [SLOW TEST:8.135 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":189,"skipped":3117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:40.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 19:44:40.427: INFO: Waiting up to 5m0s for pod "downward-api-3f7c76f8-fe20-43fe-b79d-f5042f106de7" in namespace "downward-api-5188" to be "success or failure" Mar 12 19:44:40.458: INFO: Pod "downward-api-3f7c76f8-fe20-43fe-b79d-f5042f106de7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.217328ms Mar 12 19:44:42.462: INFO: Pod "downward-api-3f7c76f8-fe20-43fe-b79d-f5042f106de7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034545239s STEP: Saw pod success Mar 12 19:44:42.462: INFO: Pod "downward-api-3f7c76f8-fe20-43fe-b79d-f5042f106de7" satisfied condition "success or failure" Mar 12 19:44:42.464: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3f7c76f8-fe20-43fe-b79d-f5042f106de7 container dapi-container: STEP: delete the pod Mar 12 19:44:42.492: INFO: Waiting for pod downward-api-3f7c76f8-fe20-43fe-b79d-f5042f106de7 to disappear Mar 12 19:44:42.522: INFO: Pod downward-api-3f7c76f8-fe20-43fe-b79d-f5042f106de7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:44:42.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5188" for this suite. 
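The host-IP variant uses a fieldRef to status.hostIP rather than a resourceFieldRef. A minimal sketch with assumed names, image, and command:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-hostip-example   # assumed
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox                    # stand-in for the e2e test image
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP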
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3154,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:44:42.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:44:42.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 12 19:44:43.224: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T19:44:43Z generation:1 name:name1 resourceVersion:1212300 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ac3bc873-c649-4ecf-965b-ae14c0d6f327] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 12 19:44:53.229: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T19:44:53Z generation:1 name:name2 resourceVersion:1212345 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:80c168a3-b10f-450f-80ca-f41ad42a6284] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 12 19:45:03.236: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T19:44:43Z generation:2 name:name1 resourceVersion:1212375 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ac3bc873-c649-4ecf-965b-ae14c0d6f327] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 12 19:45:13.242: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T19:44:53Z generation:2 name:name2 resourceVersion:1212408 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:80c168a3-b10f-450f-80ca-f41ad42a6284] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 12 19:45:23.248: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T19:44:43Z generation:2 name:name1 resourceVersion:1212440 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ac3bc873-c649-4ecf-965b-ae14c0d6f327] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 12 19:45:33.255: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T19:44:53Z generation:2 name:name2 resourceVersion:1212470 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:80c168a3-b10f-450f-80ca-f41ad42a6284] num:map[num1:9223372036854775807 
num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:45:43.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-391" for this suite. • [SLOW TEST:61.246 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":191,"skipped":3154,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:45:43.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:45:43.836: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 12 19:45:43.843: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:43.871: INFO: Number of nodes with available pods: 0 Mar 12 19:45:43.871: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:45:44.889: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:44.892: INFO: Number of nodes with available pods: 0 Mar 12 19:45:44.892: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:45:45.875: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:45.878: INFO: Number of nodes with available pods: 2 Mar 12 19:45:45.878: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 12 19:45:45.907: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:45.907: INFO: Wrong image for pod: daemon-set-plsmt. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:45.909: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:46.914: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:46.914: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:46.918: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:47.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:47.914: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:47.916: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:48.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:48.913: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:48.913: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:48.916: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:49.912: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:49.912: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:49.912: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:49.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:50.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:50.913: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:50.913: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:50.917: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:51.914: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:51.914: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 12 19:45:51.914: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:51.917: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:52.914: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:52.914: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:52.914: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:52.918: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:53.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:53.913: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:53.913: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:53.918: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:54.914: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:54.914: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:54.914: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:54.917: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:55.914: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:55.914: INFO: Wrong image for pod: daemon-set-plsmt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:55.914: INFO: Pod daemon-set-plsmt is not available Mar 12 19:45:55.917: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:56.922: INFO: Pod daemon-set-7vp4b is not available Mar 12 19:45:56.922: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:56.930: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:57.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:57.916: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:58.914: INFO: Wrong image for pod: daemon-set-j26mq. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:58.914: INFO: Pod daemon-set-j26mq is not available Mar 12 19:45:58.919: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:45:59.914: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:45:59.914: INFO: Pod daemon-set-j26mq is not available Mar 12 19:45:59.918: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:00.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:46:00.913: INFO: Pod daemon-set-j26mq is not available Mar 12 19:46:00.916: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:01.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:46:01.913: INFO: Pod daemon-set-j26mq is not available Mar 12 19:46:01.917: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:02.931: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:46:02.931: INFO: Pod daemon-set-j26mq is not available Mar 12 19:46:02.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:03.913: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:46:03.913: INFO: Pod daemon-set-j26mq is not available Mar 12 19:46:03.918: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:04.937: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 19:46:04.937: INFO: Pod daemon-set-j26mq is not available Mar 12 19:46:04.941: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:05.914: INFO: Wrong image for pod: daemon-set-j26mq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 12 19:46:05.914: INFO: Pod daemon-set-j26mq is not available Mar 12 19:46:05.918: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:06.912: INFO: Pod daemon-set-t296b is not available Mar 12 19:46:06.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 12 19:46:06.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:06.917: INFO: Number of nodes with available pods: 1 Mar 12 19:46:06.917: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 19:46:07.921: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:46:07.924: INFO: Number of nodes with available pods: 2 Mar 12 19:46:07.924: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5409, will wait for the garbage collector to delete the pods Mar 12 19:46:07.994: INFO: Deleting DaemonSet.extensions daemon-set took: 6.42148ms Mar 12 19:46:08.295: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.200445ms Mar 12 19:46:16.101: INFO: Number of nodes with available pods: 0 Mar 12 19:46:16.101: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 19:46:16.103: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5409/daemonsets","resourceVersion":"1212684"},"items":null} Mar 12 19:46:16.105: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5409/pods","resourceVersion":"1212684"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:46:16.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5409" for this suite. 
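The rolling update traced above (httpd pods replaced node by node with agnhost pods) corresponds to a DaemonSet of roughly this shape. The label key is an assumption; the name, both images, and the RollingUpdate strategy come from the log itself.

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set-example                        # assumed label
    updateStrategy:
      type: RollingUpdate                              # maxUnavailable defaults to 1
    template:
      metadata:
        labels:
          app: daemon-set-example
      spec:
        containers:
        - name: app
          image: docker.io/library/httpd:2.4.38-alpine # initial image from the log

Patching spec.template.spec.containers[0].image to gcr.io/kubernetes-e2e-test-images/agnhost:2.8 is what triggers the update: with maxUnavailable=1 only one pod may be unavailable at a time, which is why daemon-set-plsmt cycles through "is not available" and is replaced before daemon-set-j26mq is touched.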
• [SLOW TEST:32.346 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":192,"skipped":3172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:46:16.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 19:46:16.147: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 19:46:16.181: INFO: Waiting for terminating namespaces to be deleted... Mar 12 19:46:16.183: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 12 19:46:16.193: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 19:46:16.193: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:46:16.193: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 19:46:16.193: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 19:46:16.193: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 12 19:46:16.207: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 19:46:16.207: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 19:46:16.207: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 19:46:16.207: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-9cd6705a-a059-4d79-9d56-a190adc108b6 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-9cd6705a-a059-4d79-9d56-a190adc108b6 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-9cd6705a-a059-4d79-9d56-a190adc108b6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:51:22.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8375" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:306.225 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":193,"skipped":3206,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:51:22.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 12 19:51:22.410: INFO: Waiting up to 5m0s for pod "pod-a61a455a-5c37-4cee-a40f-1e8632329570" in namespace "emptydir-8194" to be "success or failure" Mar 12 19:51:22.413: INFO: Pod "pod-a61a455a-5c37-4cee-a40f-1e8632329570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.915949ms Mar 12 19:51:24.417: INFO: Pod "pod-a61a455a-5c37-4cee-a40f-1e8632329570": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006665175s STEP: Saw pod success Mar 12 19:51:24.417: INFO: Pod "pod-a61a455a-5c37-4cee-a40f-1e8632329570" satisfied condition "success or failure" Mar 12 19:51:24.420: INFO: Trying to get logs from node jerma-worker2 pod pod-a61a455a-5c37-4cee-a40f-1e8632329570 container test-container: STEP: delete the pod Mar 12 19:51:24.477: INFO: Waiting for pod pod-a61a455a-5c37-4cee-a40f-1e8632329570 to disappear Mar 12 19:51:24.485: INFO: Pod pod-a61a455a-5c37-4cee-a40f-1e8632329570 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:51:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8194" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3227,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:51:24.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:51:24.553: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 12 19:51:24.563: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 12 19:51:29.567: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 19:51:29.567: INFO: Creating deployment "test-rolling-update-deployment" Mar 12 19:51:29.570: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 12 19:51:29.580: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 12 19:51:31.586: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 12 19:51:31.589: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 19:51:31.597: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-46 /apis/apps/v1/namespaces/deployment-46/deployments/test-rolling-update-deployment cf6eef86-8138-43ae-b38d-5c12d43e4d14 1213748 1 2020-03-12 19:51:29 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f88108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 19:51:29 +0000 UTC,LastTransitionTime:2020-03-12 19:51:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-12 19:51:31 +0000 UTC,LastTransitionTime:2020-03-12 19:51:29 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 12 19:51:31.600: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-46 /apis/apps/v1/namespaces/deployment-46/replicasets/test-rolling-update-deployment-67cf4f6444 bc4f7aa2-9837-4446-9734-c09b0dbc08fa 1213737 1 2020-03-12 19:51:29 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment cf6eef86-8138-43ae-b38d-5c12d43e4d14 0xc001f88767 0xc001f88768}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f887d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 19:51:31.600: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 12 19:51:31.600: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-46 
/apis/apps/v1/namespaces/deployment-46/replicasets/test-rolling-update-controller 1451a6c4-5dfb-4d3c-a008-e368d60abb84 1213746 2 2020-03-12 19:51:24 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment cf6eef86-8138-43ae-b38d-5c12d43e4d14 0xc001f88697 0xc001f88698}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001f886f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 19:51:31.603: INFO: Pod "test-rolling-update-deployment-67cf4f6444-kd42d" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-kd42d test-rolling-update-deployment-67cf4f6444- deployment-46 /api/v1/namespaces/deployment-46/pods/test-rolling-update-deployment-67cf4f6444-kd42d f696c7ff-f2f7-4108-8888-9f4066dbbea0 1213736 0 2020-03-12 19:51:29 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 bc4f7aa2-9837-4446-9734-c09b0dbc08fa 0xc00202e567 0xc00202e568}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l7crc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l7crc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l7crc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:51:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:51:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:51:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 19:51:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.15,StartTime:2020-03-12 19:51:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 19:51:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c8025ff85d094af723709605d793a0286a4c534800fe5b514cf0759e78ae1850,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:51:31.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-46" for this suite. • [SLOW TEST:7.117 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":195,"skipped":3245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:51:31.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:51:47.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-799" for this suite. • [SLOW TEST:16.117 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":196,"skipped":3272,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:51:47.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 12 19:51:47.811: INFO: Waiting up to 5m0s for pod "var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf" in namespace "var-expansion-1083" to be "success or failure" Mar 12 19:51:47.821: INFO: Pod "var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.328303ms Mar 12 19:51:49.825: INFO: Pod "var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf": Phase="Running", Reason="", readiness=true. Elapsed: 2.014092471s Mar 12 19:51:51.829: INFO: Pod "var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018027653s STEP: Saw pod success Mar 12 19:51:51.829: INFO: Pod "var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf" satisfied condition "success or failure" Mar 12 19:51:51.831: INFO: Trying to get logs from node jerma-worker pod var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf container dapi-container: STEP: delete the pod Mar 12 19:51:51.887: INFO: Waiting for pod var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf to disappear Mar 12 19:51:51.894: INFO: Pod var-expansion-59b2f44b-0da4-4f5d-a9ea-efa3572abbbf no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:51:51.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1083" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3282,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:51:51.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8471 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 19:51:51.941: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 19:52:14.065: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.18:8080/dial?request=hostname&protocol=http&host=10.244.2.17&port=8080&tries=1'] Namespace:pod-network-test-8471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:52:14.065: INFO: >>> kubeConfig: /root/.kube/config I0312 19:52:14.097314 6 log.go:172] (0xc001e4e2c0) (0xc001ae6aa0) Create stream I0312 19:52:14.097347 6 log.go:172] (0xc001e4e2c0) (0xc001ae6aa0) Stream added, broadcasting: 1 I0312 19:52:14.101862 6 log.go:172] (0xc001e4e2c0) Reply frame received for 1 I0312 19:52:14.101951 6 log.go:172] (0xc001e4e2c0) (0xc0014cc000) Create stream I0312 19:52:14.101987 6 log.go:172] (0xc001e4e2c0) (0xc0014cc000) Stream added, broadcasting: 3 I0312 19:52:14.105398 6 log.go:172] (0xc001e4e2c0) Reply frame received for 3 I0312 19:52:14.105428 6 log.go:172] (0xc001e4e2c0) (0xc001ae6be0) Create stream I0312 19:52:14.105439 6 log.go:172] (0xc001e4e2c0) (0xc001ae6be0) Stream added, broadcasting: 5 I0312 19:52:14.106484 6 log.go:172] (0xc001e4e2c0) Reply frame received for 5 I0312 19:52:14.167150 6 log.go:172] (0xc001e4e2c0) Data frame received for 3 I0312 19:52:14.167179 6 log.go:172] (0xc0014cc000) (3) Data frame handling I0312 19:52:14.167273 6 log.go:172] (0xc0014cc000) (3) Data frame sent I0312 19:52:14.167591 6 log.go:172] (0xc001e4e2c0) Data frame received for 3 I0312 19:52:14.167617 6 log.go:172] (0xc0014cc000) (3) Data frame handling I0312 19:52:14.167987 6 log.go:172] (0xc001e4e2c0) Data frame received for 5 I0312 19:52:14.168000 6 log.go:172] (0xc001ae6be0) (5) Data frame handling I0312 19:52:14.169086 6 log.go:172] (0xc001e4e2c0) Data frame received for 1 I0312 19:52:14.169111 6 log.go:172] (0xc001ae6aa0) (1) Data frame handling I0312 19:52:14.169123 6 log.go:172] (0xc001ae6aa0) (1) Data frame sent I0312 19:52:14.169140 6 log.go:172] (0xc001e4e2c0) (0xc001ae6aa0) Stream removed, broadcasting: 1 I0312 19:52:14.169160 6 log.go:172] (0xc001e4e2c0) Go away received I0312 19:52:14.169239 6 log.go:172] (0xc001e4e2c0) (0xc001ae6aa0) Stream removed, broadcasting: 1 I0312 19:52:14.169261 6 log.go:172] (0xc001e4e2c0) (0xc0014cc000) Stream removed, 
broadcasting: 3 I0312 19:52:14.169297 6 log.go:172] (0xc001e4e2c0) (0xc001ae6be0) Stream removed, broadcasting: 5 Mar 12 19:52:14.169: INFO: Waiting for responses: map[] Mar 12 19:52:14.171: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.18:8080/dial?request=hostname&protocol=http&host=10.244.1.15&port=8080&tries=1'] Namespace:pod-network-test-8471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:52:14.172: INFO: >>> kubeConfig: /root/.kube/config I0312 19:52:14.198248 6 log.go:172] (0xc001c9a580) (0xc001bbcf00) Create stream I0312 19:52:14.198274 6 log.go:172] (0xc001c9a580) (0xc001bbcf00) Stream added, broadcasting: 1 I0312 19:52:14.200796 6 log.go:172] (0xc001c9a580) Reply frame received for 1 I0312 19:52:14.200830 6 log.go:172] (0xc001c9a580) (0xc0014cc1e0) Create stream I0312 19:52:14.200841 6 log.go:172] (0xc001c9a580) (0xc0014cc1e0) Stream added, broadcasting: 3 I0312 19:52:14.201616 6 log.go:172] (0xc001c9a580) Reply frame received for 3 I0312 19:52:14.201644 6 log.go:172] (0xc001c9a580) (0xc001bbcfa0) Create stream I0312 19:52:14.201652 6 log.go:172] (0xc001c9a580) (0xc001bbcfa0) Stream added, broadcasting: 5 I0312 19:52:14.202431 6 log.go:172] (0xc001c9a580) Reply frame received for 5 I0312 19:52:14.283717 6 log.go:172] (0xc001c9a580) Data frame received for 3 I0312 19:52:14.283732 6 log.go:172] (0xc0014cc1e0) (3) Data frame handling I0312 19:52:14.283744 6 log.go:172] (0xc0014cc1e0) (3) Data frame sent I0312 19:52:14.284156 6 log.go:172] (0xc001c9a580) Data frame received for 3 I0312 19:52:14.284171 6 log.go:172] (0xc0014cc1e0) (3) Data frame handling I0312 19:52:14.284185 6 log.go:172] (0xc001c9a580) Data frame received for 5 I0312 19:52:14.284199 6 log.go:172] (0xc001bbcfa0) (5) Data frame handling I0312 19:52:14.285179 6 log.go:172] (0xc001c9a580) Data frame received for 1 I0312 19:52:14.285190 6 log.go:172] (0xc001bbcf00) (1) Data frame handling I0312 19:52:14.285196 6 log.go:172] (0xc001bbcf00) (1) Data frame sent I0312 19:52:14.285204 6 log.go:172] (0xc001c9a580) (0xc001bbcf00) Stream removed, broadcasting: 1 I0312 19:52:14.285213 6 log.go:172] (0xc001c9a580) Go away received I0312 19:52:14.285329 6 log.go:172] (0xc001c9a580) (0xc001bbcf00) Stream removed, broadcasting: 1 I0312 19:52:14.285343 6 log.go:172] (0xc001c9a580) (0xc0014cc1e0) Stream removed, broadcasting: 3 I0312 19:52:14.285354 6 log.go:172] (0xc001c9a580) (0xc001bbcfa0) Stream removed, broadcasting: 5 Mar 12 19:52:14.285: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:52:14.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8471" for this suite. 
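The ExecWithOptions entries above are the framework driving the same check a plain kubectl exec would: the agnhost container's /dial endpoint asks one test pod to contact another over HTTP and echoes back the hostnames it collected, and an empty "Waiting for responses: map[]" means nothing is still outstanding. A hedged replay, with the run-specific pod IPs replaced by placeholders you would read from kubectl get pods -o wide:

kubectl exec -n pod-network-test-8471 host-test-container-pod -c agnhost -- /bin/sh -c \
  "curl -g -q -s 'http://<host-pod-ip>:8080/dial?request=hostname&protocol=http&host=<target-pod-ip>&port=8080&tries=1'"
# a healthy reply looks like {"responses":["<target-pod-hostname>"]}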
• [SLOW TEST:22.388 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:52:14.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:52:14.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0179ab71-defb-4614-b644-7ccdc9703dfb" in namespace "downward-api-9289" to be "success or failure" Mar 12 19:52:14.390: INFO: Pod "downwardapi-volume-0179ab71-defb-4614-b644-7ccdc9703dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 27.486698ms Mar 12 19:52:16.393: INFO: Pod "downwardapi-volume-0179ab71-defb-4614-b644-7ccdc9703dfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.029627977s STEP: Saw pod success Mar 12 19:52:16.393: INFO: Pod "downwardapi-volume-0179ab71-defb-4614-b644-7ccdc9703dfb" satisfied condition "success or failure" Mar 12 19:52:16.394: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0179ab71-defb-4614-b644-7ccdc9703dfb container client-container: STEP: delete the pod Mar 12 19:52:16.413: INFO: Waiting for pod downwardapi-volume-0179ab71-defb-4614-b644-7ccdc9703dfb to disappear Mar 12 19:52:16.429: INFO: Pod downwardapi-volume-0179ab71-defb-4614-b644-7ccdc9703dfb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:52:16.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9289" for this suite. 
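The downward API check above hinges on a fallback: when a container declares no memory limit, a resourceFieldRef for limits.memory reports the node's allocatable memory instead of failing. A minimal sketch with illustrative pod and mount names (the container name mirrors the test's client-container):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # deliberately no resources.limits.memory here
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# with no limit declared, the file holds the node's allocatable memory in bytes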
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3325,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:52:16.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2160.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2160.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 132.39.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.39.132_udp@PTR;check="$$(dig +tcp +noall +answer +search 132.39.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.39.132_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2160.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2160.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2160.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2160.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2160.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 132.39.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.39.132_udp@PTR;check="$$(dig +tcp +noall +answer +search 132.39.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.39.132_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:52:20.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.594: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.597: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.600: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.618: INFO: Unable to read jessie_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.620: INFO: Unable to read jessie_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.623: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.625: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:20.641: INFO: Lookups using dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329 failed for: [wheezy_udp@dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_udp@dns-test-service.dns-2160.svc.cluster.local jessie_tcp@dns-test-service.dns-2160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local] Mar 12 19:52:25.645: INFO: Unable to read wheezy_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.648: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods 
dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.651: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.655: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.677: INFO: Unable to read jessie_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.680: INFO: Unable to read jessie_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.683: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.686: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:25.702: INFO: Lookups using dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329 failed for: [wheezy_udp@dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_udp@dns-test-service.dns-2160.svc.cluster.local jessie_tcp@dns-test-service.dns-2160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local] Mar 12 19:52:30.645: INFO: Unable to read wheezy_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.648: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.651: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.654: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.673: INFO: Unable to read jessie_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the 
server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.675: INFO: Unable to read jessie_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.678: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.680: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:30.697: INFO: Lookups using dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329 failed for: [wheezy_udp@dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_udp@dns-test-service.dns-2160.svc.cluster.local jessie_tcp@dns-test-service.dns-2160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local] Mar 12 19:52:35.646: INFO: Unable to read wheezy_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.648: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.651: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.653: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.669: INFO: Unable to read jessie_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.672: INFO: Unable to read jessie_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.674: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.676: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod 
dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:35.708: INFO: Lookups using dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329 failed for: [wheezy_udp@dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_udp@dns-test-service.dns-2160.svc.cluster.local jessie_tcp@dns-test-service.dns-2160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local] Mar 12 19:52:40.644: INFO: Unable to read wheezy_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.646: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.651: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.665: INFO: Unable to read jessie_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.671: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.674: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:40.688: INFO: Lookups using dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329 failed for: [wheezy_udp@dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_udp@dns-test-service.dns-2160.svc.cluster.local jessie_tcp@dns-test-service.dns-2160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local] Mar 12 
19:52:45.645: INFO: Unable to read wheezy_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.649: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.652: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.655: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.676: INFO: Unable to read jessie_udp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.682: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.684: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local from pod dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329: the server could not find the requested resource (get pods dns-test-362c2b84-a960-4258-b254-8ed50e62d329) Mar 12 19:52:45.700: INFO: Lookups using dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329 failed for: [wheezy_udp@dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@dns-test-service.dns-2160.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_udp@dns-test-service.dns-2160.svc.cluster.local jessie_tcp@dns-test-service.dns-2160.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2160.svc.cluster.local] Mar 12 19:52:50.708: INFO: DNS probes using dns-2160/dns-test-362c2b84-a960-4258-b254-8ed50e62d329 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:52:50.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2160" for this suite. 
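Two things worth noting in the probe output above: the early "the server could not find the requested resource" lines are the framework reading the probe pod's /results files through the API server before the pod has written them, not DNS lookups failing, and the run converges at 19:52:50. The records themselves follow the standard layout, an A record at <service>.<namespace>.svc.cluster.local and one SRV record per named port. They can be checked by hand along these lines; the dnsutils image tag is an assumption, and the lookups need a live service rather than the torn-down dns-2160 namespace:

kubectl run dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 --restart=Never -it --rm -- \
  nslookup dns-test-service.dns-2160.svc.cluster.local
kubectl run dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 --restart=Never -it --rm -- \
  dig +short SRV _http._tcp.dns-test-service.dns-2160.svc.cluster.local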
• [SLOW TEST:34.498 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":200,"skipped":3338,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:52:50.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:52:50.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2276' Mar 12 19:52:53.222: INFO: stderr: "" Mar 12 19:52:53.222: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 12 19:52:53.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2276' Mar 12 19:52:53.516: INFO: stderr: "" Mar 12 19:52:53.516: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 19:52:54.533: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 19:52:54.533: INFO: Found 0 / 1 Mar 12 19:52:55.520: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 19:52:55.520: INFO: Found 1 / 1 Mar 12 19:52:55.520: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 12 19:52:55.524: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 19:52:55.524: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 12 19:52:55.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-r9dcl --namespace=kubectl-2276' Mar 12 19:52:55.631: INFO: stderr: "" Mar 12 19:52:55.631: INFO: stdout: "Name: agnhost-master-r9dcl\nNamespace: kubectl-2276\nPriority: 0\nNode: jerma-worker2/172.17.0.5\nStart Time: Thu, 12 Mar 2020 19:52:53 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.17\nIPs:\n IP: 10.244.1.17\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://966d6e8e4cbb230f0b8a786df41543ea7b7002c2fc6bba034964d02b79f1c278\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 12 Mar 2020 19:52:54 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-22ll9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-22ll9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-22ll9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-2276/agnhost-master-r9dcl to jerma-worker2\n Normal Pulled 1s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Mar 12 19:52:55.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2276' Mar 12 19:52:55.736: INFO: stderr: "" Mar 12 19:52:55.736: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2276\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-r9dcl\n" Mar 12 19:52:55.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2276' Mar 12 19:52:55.812: INFO: stderr: "" Mar 12 19:52:55.812: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2276\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.61.76\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.17:6379\nSession Affinity: None\nEvents: <none>\n" Mar 12 19:52:55.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 12 19:52:55.901: INFO: stderr: "" Mar 12 19:52:55.901: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:47:04 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Thu, 12 Mar 2020 19:52:54 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 12 Mar 2020 19:50:47 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 12 Mar 2020 19:50:47 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 12 Mar 2020 19:50:47 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 12 Mar 2020 19:50:47 +0000 Sun, 08 Mar 2020 14:48:18 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 3f4950fefd574d4aaa94513c5781e5d9\n System UUID: 58a385c4-2d08-428a-9405-5e6b12d5bd17\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-6n4ms 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d5h\n kube-system coredns-6955765f44-nlwfn 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d5h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kindnet-2glhp 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d5h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-proxy-zmch2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n local-path-storage local-path-provisioner-85445b74d4-gpcbt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Mar 12 19:52:55.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2276' Mar 12 19:52:55.995: INFO: stderr: "" Mar 12 19:52:55.995: INFO: stdout: "Name: kubectl-2276\nLabels: e2e-framework=kubectl\n e2e-run=c212bd9b-05e1-49f9-850c-bcbb3611cb19\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:52:55.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2276" for this suite. • [SLOW TEST:5.067 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1154 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":201,"skipped":3355,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:52:56.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-eceb6d30-aea9-43ea-9d96-05f7d3ce216a STEP: Creating secret with name s-test-opt-upd-db919059-98fb-4b7c-93cd-d1ab2e02c736 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-eceb6d30-aea9-43ea-9d96-05f7d3ce216a STEP: Updating secret s-test-opt-upd-db919059-98fb-4b7c-93cd-d1ab2e02c736 STEP: Creating secret with name s-test-opt-create-e07b426c-b211-47e2-b62b-875ae3f326db STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:02.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5876" for this suite. 
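The optional-secrets test above relies on two behaviors: a secret volume marked optional lets the pod start even while the named secret is absent, and the kubelet later projects creations, updates, and deletions of that secret into the mounted directory within its sync period. A sketch with illustrative names (the real test suffixes its secret names with UUIDs):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo    # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    secret:
      secretName: s-test-opt-create    # does not exist yet
      optional: true                   # the pod starts anyway
EOF
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
# within the kubelet's sync period, /etc/creds/data-1 appears inside the running pod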
• [SLOW TEST:6.204 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3363,"failed":0} [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:02.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 12 19:53:02.269: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 12 19:53:07.287: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:07.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6280" for this suite. 
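"Released" here is label-driven: once a pod's labels stop matching the ReplicationController's selector, the controller drops its ownerReference to the pod and creates a replacement to restore the replica count. A hand-run sketch; the pod name is per-run, so the placeholder below must be filled in from kubectl get pods:

# relabel the pod so it no longer matches the selector name=pod-release
kubectl label pod <pod-release-xxxxx> name=released --overwrite
# the RC spins up a replacement to get back to its desired count
kubectl get pods -l name=pod-release
# the relabeled pod keeps running, now with no ownerReference to the RC
kubectl get pod <pod-release-xxxxx> -o jsonpath='{.metadata.ownerReferences}'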
• [SLOW TEST:5.164 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":203,"skipped":3363,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:07.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7819 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7819 STEP: creating replication controller externalsvc in namespace services-7819 I0312 19:53:07.587340 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7819, replica count: 2 I0312 19:53:10.637717 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 12 19:53:10.688: INFO: Creating new exec pod Mar 12 19:53:12.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7819 execpod4vdlv -- /bin/sh -x -c nslookup nodeport-service' Mar 12 19:53:12.918: INFO: stderr: "I0312 19:53:12.830027 3413 log.go:172] (0xc0009dd970) (0xc00090a8c0) Create stream\nI0312 19:53:12.830069 3413 log.go:172] (0xc0009dd970) (0xc00090a8c0) Stream added, broadcasting: 1\nI0312 19:53:12.833256 3413 log.go:172] (0xc0009dd970) Reply frame received for 1\nI0312 19:53:12.833284 3413 log.go:172] (0xc0009dd970) (0xc0005ee5a0) Create stream\nI0312 19:53:12.833290 3413 log.go:172] (0xc0009dd970) (0xc0005ee5a0) Stream added, broadcasting: 3\nI0312 19:53:12.833797 3413 log.go:172] (0xc0009dd970) Reply frame received for 3\nI0312 19:53:12.833817 3413 log.go:172] (0xc0009dd970) (0xc000733360) Create stream\nI0312 19:53:12.833823 3413 log.go:172] (0xc0009dd970) (0xc000733360) Stream added, broadcasting: 5\nI0312 19:53:12.834427 3413 log.go:172] (0xc0009dd970) Reply frame received for 5\nI0312 19:53:12.907330 3413 log.go:172] (0xc0009dd970) Data frame received for 5\nI0312 19:53:12.907348 3413 log.go:172] (0xc000733360) (5) Data frame handling\nI0312 19:53:12.907358 3413 log.go:172] (0xc000733360) (5) Data frame sent\n+ nslookup nodeport-service\nI0312 19:53:12.912149 3413 log.go:172] (0xc0009dd970) Data frame received for 3\nI0312 19:53:12.912164 3413 
log.go:172] (0xc0005ee5a0) (3) Data frame handling\nI0312 19:53:12.912173 3413 log.go:172] (0xc0005ee5a0) (3) Data frame sent\nI0312 19:53:12.913141 3413 log.go:172] (0xc0009dd970) Data frame received for 3\nI0312 19:53:12.913150 3413 log.go:172] (0xc0005ee5a0) (3) Data frame handling\nI0312 19:53:12.913161 3413 log.go:172] (0xc0005ee5a0) (3) Data frame sent\nI0312 19:53:12.913403 3413 log.go:172] (0xc0009dd970) Data frame received for 3\nI0312 19:53:12.913417 3413 log.go:172] (0xc0005ee5a0) (3) Data frame handling\nI0312 19:53:12.913460 3413 log.go:172] (0xc0009dd970) Data frame received for 5\nI0312 19:53:12.913470 3413 log.go:172] (0xc000733360) (5) Data frame handling\nI0312 19:53:12.915214 3413 log.go:172] (0xc0009dd970) Data frame received for 1\nI0312 19:53:12.915233 3413 log.go:172] (0xc00090a8c0) (1) Data frame handling\nI0312 19:53:12.915243 3413 log.go:172] (0xc00090a8c0) (1) Data frame sent\nI0312 19:53:12.915256 3413 log.go:172] (0xc0009dd970) (0xc00090a8c0) Stream removed, broadcasting: 1\nI0312 19:53:12.915273 3413 log.go:172] (0xc0009dd970) Go away received\nI0312 19:53:12.915521 3413 log.go:172] (0xc0009dd970) (0xc00090a8c0) Stream removed, broadcasting: 1\nI0312 19:53:12.915533 3413 log.go:172] (0xc0009dd970) (0xc0005ee5a0) Stream removed, broadcasting: 3\nI0312 19:53:12.915538 3413 log.go:172] (0xc0009dd970) (0xc000733360) Stream removed, broadcasting: 5\n" Mar 12 19:53:12.918: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7819.svc.cluster.local\tcanonical name = externalsvc.services-7819.svc.cluster.local.\nName:\texternalsvc.services-7819.svc.cluster.local\nAddress: 10.102.211.10\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7819, will wait for the garbage collector to delete the pods Mar 12 19:53:13.011: INFO: Deleting ReplicationController externalsvc took: 3.428002ms Mar 12 19:53:13.111: INFO: Terminating ReplicationController externalsvc pods took: 100.194453ms Mar 12 19:53:26.127: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:26.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7819" for this suite. 
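What the spec checks is that, after the type change, the service's cluster DNS name resolves as a CNAME to the externalName target, which is exactly what the captured nslookup stdout shows. A hand-rolled equivalent of the end state (namespace and object names here are illustrative, not the suite's):

kubectl create namespace services-demo
kubectl create service externalname nodeport-service \
  --external-name externalsvc.services-demo.svc.cluster.local -n services-demo
# Resolve it from a client pod; the agnhost image ships nslookup, as the
# suite's exec pod demonstrates above.
kubectl run execpod --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 \
  --restart=Never -n services-demo -- pause
kubectl exec -n services-demo execpod -- nslookup nodeport-service
# Expect: nodeport-service.services-demo.svc.cluster.local
#         canonical name = externalsvc.services-demo.svc.cluster.local.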
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.794 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":204,"skipped":3371,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:26.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:28.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8828" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":205,"skipped":3384,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:28.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:53:30.474: INFO: Waiting up to 5m0s for pod "client-envvars-bcad5698-9def-425e-9103-01a38e68a3cd" in namespace "pods-1333" to be "success or failure" Mar 12 19:53:30.479: INFO: Pod "client-envvars-bcad5698-9def-425e-9103-01a38e68a3cd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.309395ms Mar 12 19:53:32.482: INFO: Pod "client-envvars-bcad5698-9def-425e-9103-01a38e68a3cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008719035s STEP: Saw pod success Mar 12 19:53:32.482: INFO: Pod "client-envvars-bcad5698-9def-425e-9103-01a38e68a3cd" satisfied condition "success or failure" Mar 12 19:53:32.485: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-bcad5698-9def-425e-9103-01a38e68a3cd container env3cont: STEP: delete the pod Mar 12 19:53:32.512: INFO: Waiting for pod client-envvars-bcad5698-9def-425e-9103-01a38e68a3cd to disappear Mar 12 19:53:32.521: INFO: Pod client-envvars-bcad5698-9def-425e-9103-01a38e68a3cd no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:32.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1333" for this suite. •{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3395,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:32.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-a1fe23e6-fbd1-48ad-884a-ae1b3f83b3e2 STEP: Creating a pod to test consume configMaps Mar 12 19:53:32.625: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf8ef48b-5743-407b-b215-4fa9fdb9aafa" in namespace "projected-8024" to be "success or failure" Mar 12 19:53:32.629: INFO: Pod "pod-projected-configmaps-bf8ef48b-5743-407b-b215-4fa9fdb9aafa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.852635ms Mar 12 19:53:34.632: INFO: Pod "pod-projected-configmaps-bf8ef48b-5743-407b-b215-4fa9fdb9aafa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006643455s STEP: Saw pod success Mar 12 19:53:34.632: INFO: Pod "pod-projected-configmaps-bf8ef48b-5743-407b-b215-4fa9fdb9aafa" satisfied condition "success or failure" Mar 12 19:53:34.634: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-bf8ef48b-5743-407b-b215-4fa9fdb9aafa container projected-configmap-volume-test: STEP: delete the pod Mar 12 19:53:34.689: INFO: Waiting for pod pod-projected-configmaps-bf8ef48b-5743-407b-b215-4fa9fdb9aafa to disappear Mar 12 19:53:34.718: INFO: Pod pod-projected-configmaps-bf8ef48b-5743-407b-b215-4fa9fdb9aafa no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:34.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8024" for this suite. 
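The "mappings and Item mode" variant exercises two projected-configMap features at once: remapping a key to a nested path and setting a per-item file mode. A sketch of the pod shape the test builds, with illustrative key, path, and mode values:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected/path/to && cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-2   # the "mappings" part: key remapped to a nested path
            mode: 0400             # the "Item mode set" part: per-file permission bits
EOF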
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3396,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:34.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 12 19:53:34.762: INFO: Waiting up to 5m0s for pod "pod-540e3ec3-eaa8-41eb-95d9-0b4b946b5bd1" in namespace "emptydir-9613" to be "success or failure" Mar 12 19:53:34.766: INFO: Pod "pod-540e3ec3-eaa8-41eb-95d9-0b4b946b5bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.672583ms Mar 12 19:53:36.769: INFO: Pod "pod-540e3ec3-eaa8-41eb-95d9-0b4b946b5bd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007302896s STEP: Saw pod success Mar 12 19:53:36.769: INFO: Pod "pod-540e3ec3-eaa8-41eb-95d9-0b4b946b5bd1" satisfied condition "success or failure" Mar 12 19:53:36.771: INFO: Trying to get logs from node jerma-worker2 pod pod-540e3ec3-eaa8-41eb-95d9-0b4b946b5bd1 container test-container: STEP: delete the pod Mar 12 19:53:36.815: INFO: Waiting for pod pod-540e3ec3-eaa8-41eb-95d9-0b4b946b5bd1 to disappear Mar 12 19:53:36.818: INFO: Pod pod-540e3ec3-eaa8-41eb-95d9-0b4b946b5bd1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:36.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9613" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3414,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:36.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-xxpl STEP: Creating a pod to test atomic-volume-subpath Mar 12 19:53:36.880: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xxpl" in namespace "subpath-2850" to be "success or failure" Mar 12 19:53:36.884: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416055ms Mar 12 19:53:38.887: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 2.00782214s Mar 12 19:53:40.891: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 4.011239549s Mar 12 19:53:42.909: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 6.02970677s Mar 12 19:53:44.918: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 8.038112782s Mar 12 19:53:46.922: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 10.042049063s Mar 12 19:53:48.925: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 12.045554539s Mar 12 19:53:50.929: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 14.049681654s Mar 12 19:53:52.940: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 16.060044554s Mar 12 19:53:54.943: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 18.063728922s Mar 12 19:53:56.947: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Running", Reason="", readiness=true. Elapsed: 20.067479528s Mar 12 19:53:58.951: INFO: Pod "pod-subpath-test-projected-xxpl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.071325149s STEP: Saw pod success Mar 12 19:53:58.951: INFO: Pod "pod-subpath-test-projected-xxpl" satisfied condition "success or failure" Mar 12 19:53:58.953: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-xxpl container test-container-subpath-projected-xxpl: STEP: delete the pod Mar 12 19:53:58.986: INFO: Waiting for pod pod-subpath-test-projected-xxpl to disappear Mar 12 19:53:58.988: INFO: Pod pod-subpath-test-projected-xxpl no longer exists STEP: Deleting pod pod-subpath-test-projected-xxpl Mar 12 19:53:58.989: INFO: Deleting pod "pod-subpath-test-projected-xxpl" in namespace "subpath-2850" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:53:58.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2850" for this suite. • [SLOW TEST:22.172 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":209,"skipped":3416,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:53:58.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 12 19:53:59.629: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:54:02.663: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:54:02.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:54:03.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8103" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:5.095 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":210,"skipped":3424,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:54:04.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 19:54:04.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9047' Mar 12 19:54:04.232: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 19:54:04.232: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 12 19:54:04.271: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-skzcq] Mar 12 19:54:04.271: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-skzcq" in namespace "kubectl-9047" to be "running and ready" Mar 12 19:54:04.273: INFO: Pod "e2e-test-httpd-rc-skzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379364ms Mar 12 19:54:06.275: INFO: Pod "e2e-test-httpd-rc-skzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004454878s Mar 12 19:54:08.278: INFO: Pod "e2e-test-httpd-rc-skzcq": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.007271299s Mar 12 19:54:08.278: INFO: Pod "e2e-test-httpd-rc-skzcq" satisfied condition "running and ready" Mar 12 19:54:08.278: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-skzcq] Mar 12 19:54:08.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9047' Mar 12 19:54:08.400: INFO: stderr: "" Mar 12 19:54:08.400: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.28. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.28. Set the 'ServerName' directive globally to suppress this message\n[Thu Mar 12 19:54:05.427240 2020] [mpm_event:notice] [pid 1:tid 139633027906408] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Mar 12 19:54:05.427298 2020] [core:notice] [pid 1:tid 139633027906408] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 12 19:54:08.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9047' Mar 12 19:54:08.489: INFO: stderr: "" Mar 12 19:54:08.489: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:54:08.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9047" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":211,"skipped":3431,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:54:08.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 12 19:54:08.566: INFO: namespace kubectl-2615 Mar 12 19:54:08.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2615' Mar 12 19:54:08.907: INFO: stderr: "" Mar 12 19:54:08.907: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 19:54:09.912: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 19:54:09.912: INFO: Found 0 / 1 Mar 12 19:54:10.910: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 19:54:10.910: INFO: Found 1 / 1 Mar 12 19:54:10.911: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Mar 12 19:54:10.913: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 19:54:10.913: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 12 19:54:10.913: INFO: wait on agnhost-master startup in kubectl-2615 Mar 12 19:54:10.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-h5cv5 agnhost-master --namespace=kubectl-2615' Mar 12 19:54:11.035: INFO: stderr: "" Mar 12 19:54:11.035: INFO: stdout: "Paused\n" STEP: exposing RC Mar 12 19:54:11.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2615' Mar 12 19:54:11.156: INFO: stderr: "" Mar 12 19:54:11.156: INFO: stdout: "service/rm2 exposed\n" Mar 12 19:54:11.173: INFO: Service rm2 in namespace kubectl-2615 found. STEP: exposing service Mar 12 19:54:13.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2615' Mar 12 19:54:13.510: INFO: stderr: "" Mar 12 19:54:13.510: INFO: stdout: "service/rm3 exposed\n" Mar 12 19:54:13.566: INFO: Service rm3 in namespace kubectl-2615 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:54:15.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2615" for this suite. • [SLOW TEST:7.048 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":212,"skipped":3441,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:54:15.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 19:54:18.182: INFO: Successfully updated pod "labelsupdateb5fc99c5-246b-4e49-8d31-d1b7ea289535" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:54:20.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6411" for this suite. 
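The "Successfully updated pod" line above is the relabel step: the spec mounts metadata.labels through a projected downwardAPI volume, changes the labels on the running pod, and waits for the kubelet's periodic, atomic refresh of the volume to surface the new value. A sketch with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo key=value2 --overwrite
kubectl logs -f labelsupdate-demo   # key="value1" flips to key="value2" within the sync period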
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3448,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:54:20.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 12 19:54:20.277: INFO: Created pod &Pod{ObjectMeta:{dns-8980 dns-8980 /api/v1/namespaces/dns-8980/pods/dns-8980 b7a4ba6e-7294-48f4-bae2-30981e54b112 1214979 0 2020-03-12 19:54:20 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pr87,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pr87,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pr87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates
:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Mar 12 19:54:24.285: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8980 PodName:dns-8980 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:54:24.285: INFO: >>> kubeConfig: /root/.kube/config I0312 19:54:24.320371 6 log.go:172] (0xc00173a420) (0xc001df9900) Create stream I0312 19:54:24.320400 6 log.go:172] (0xc00173a420) (0xc001df9900) Stream added, broadcasting: 1 I0312 19:54:24.322387 6 log.go:172] (0xc00173a420) Reply frame received for 1 I0312 19:54:24.322431 6 log.go:172] (0xc00173a420) (0xc001ede000) Create stream I0312 19:54:24.322450 6 log.go:172] (0xc00173a420) (0xc001ede000) Stream added, broadcasting: 3 I0312 19:54:24.323433 6 log.go:172] (0xc00173a420) Reply frame received for 3 I0312 19:54:24.323480 6 log.go:172] (0xc00173a420) (0xc001df99a0) Create stream I0312 19:54:24.323490 6 log.go:172] (0xc00173a420) (0xc001df99a0) Stream added, broadcasting: 5 I0312 19:54:24.324778 6 log.go:172] (0xc00173a420) Reply frame received for 5 I0312 19:54:24.396161 6 log.go:172] (0xc00173a420) Data frame received for 3 I0312 19:54:24.396199 6 log.go:172] (0xc001ede000) (3) Data frame handling I0312 19:54:24.396229 6 log.go:172] (0xc001ede000) (3) Data frame sent I0312 19:54:24.397151 6 log.go:172] (0xc00173a420) Data frame received for 5 I0312 19:54:24.397186 6 log.go:172] (0xc001df99a0) (5) Data frame handling I0312 19:54:24.397226 6 log.go:172] (0xc00173a420) Data frame received for 3 I0312 19:54:24.397253 6 log.go:172] (0xc001ede000) (3) Data frame handling I0312 19:54:24.398664 6 log.go:172] (0xc00173a420) Data frame received for 1 I0312 19:54:24.398697 6 log.go:172] (0xc001df9900) (1) Data frame handling I0312 19:54:24.398753 6 log.go:172] (0xc001df9900) (1) Data frame sent I0312 19:54:24.398777 6 log.go:172] (0xc00173a420) (0xc001df9900) Stream removed, broadcasting: 1 I0312 19:54:24.398794 6 log.go:172] (0xc00173a420) Go away received I0312 19:54:24.398930 6 log.go:172] (0xc00173a420) (0xc001df9900) Stream removed, broadcasting: 1 I0312 19:54:24.398949 6 log.go:172] (0xc00173a420) (0xc001ede000) Stream removed, broadcasting: 3 I0312 19:54:24.398958 6 log.go:172] (0xc00173a420) (0xc001df99a0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 12 19:54:24.398: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8980 PodName:dns-8980 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:54:24.399: INFO: >>> kubeConfig: /root/.kube/config I0312 19:54:24.424952 6 log.go:172] (0xc001c9a4d0) (0xc001a70a00) Create stream I0312 19:54:24.424974 6 log.go:172] (0xc001c9a4d0) (0xc001a70a00) Stream added, broadcasting: 1 I0312 19:54:24.428232 6 log.go:172] (0xc001c9a4d0) Reply frame received for 1 I0312 19:54:24.428284 6 log.go:172] (0xc001c9a4d0) (0xc001f71180) Create stream I0312 19:54:24.428306 6 log.go:172] (0xc001c9a4d0) (0xc001f71180) Stream added, broadcasting: 3 I0312 19:54:24.431719 6 log.go:172] (0xc001c9a4d0) Reply frame received for 3 I0312 19:54:24.431765 6 log.go:172] (0xc001c9a4d0) (0xc001df9ae0) Create stream I0312 19:54:24.431784 6 log.go:172] (0xc001c9a4d0) (0xc001df9ae0) Stream added, broadcasting: 5 I0312 19:54:24.433627 6 log.go:172] (0xc001c9a4d0) Reply frame received for 5 I0312 19:54:24.505214 6 log.go:172] (0xc001c9a4d0) Data frame received for 3 I0312 19:54:24.505258 6 log.go:172] (0xc001f71180) (3) Data frame handling I0312 19:54:24.505290 6 log.go:172] (0xc001f71180) (3) Data frame sent I0312 19:54:24.505639 6 log.go:172] (0xc001c9a4d0) Data frame received for 3 I0312 19:54:24.505656 6 log.go:172] (0xc001f71180) (3) Data frame handling I0312 19:54:24.505952 6 log.go:172] (0xc001c9a4d0) Data frame received for 5 I0312 19:54:24.505967 6 log.go:172] (0xc001df9ae0) (5) Data frame handling I0312 19:54:24.507540 6 log.go:172] (0xc001c9a4d0) Data frame received for 1 I0312 19:54:24.507564 6 log.go:172] (0xc001a70a00) (1) Data frame handling I0312 19:54:24.507577 6 log.go:172] (0xc001a70a00) (1) Data frame sent I0312 19:54:24.507592 6 log.go:172] (0xc001c9a4d0) (0xc001a70a00) Stream removed, broadcasting: 1 I0312 19:54:24.507611 6 log.go:172] (0xc001c9a4d0) Go away received I0312 19:54:24.507878 6 log.go:172] (0xc001c9a4d0) (0xc001a70a00) Stream removed, broadcasting: 1 I0312 19:54:24.507899 6 log.go:172] (0xc001c9a4d0) (0xc001f71180) Stream removed, broadcasting: 3 I0312 19:54:24.507910 6 log.go:172] (0xc001c9a4d0) (0xc001df9ae0) Stream removed, broadcasting: 5 Mar 12 19:54:24.507: INFO: Deleting pod dns-8980... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:54:24.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8980" for this suite. 
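Stripped of the generated metadata, the pod spec dumped above reduces to three fields: dnsPolicy: None plus a dnsConfig carrying one nameserver and one search path. The equivalent manifest is roughly:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
spec:
  dnsPolicy: "None"                 # bypass the cluster resolver entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]        # matches Nameservers:[1.1.1.1] in the dump
    searches: ["resolv.conf.local"] # matches Searches:[resolv.conf.local]
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
EOF
# The suite verifies via agnhost subcommands; reading resolv.conf directly
# shows the same result (assuming the image ships a shell and cat):
kubectl exec dns-demo -- cat /etc/resolv.conf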
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":214,"skipped":3458,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:54:24.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:54:24.638: INFO: Create a RollingUpdate DaemonSet Mar 12 19:54:24.641: INFO: Check that daemon pods launch on every node of the cluster Mar 12 19:54:24.662: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:54:24.667: INFO: Number of nodes with available pods: 0 Mar 12 19:54:24.667: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:54:25.671: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:54:25.673: INFO: Number of nodes with available pods: 0 Mar 12 19:54:25.673: INFO: Node jerma-worker is running more than one daemon pod Mar 12 19:54:26.671: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:54:26.674: INFO: Number of nodes with available pods: 2 Mar 12 19:54:26.674: INFO: Number of running nodes: 2, number of available pods: 2 Mar 12 19:54:26.674: INFO: Update the DaemonSet to trigger a rollout Mar 12 19:54:26.680: INFO: Updating DaemonSet daemon-set Mar 12 19:54:30.709: INFO: Roll back the DaemonSet before rollout is complete Mar 12 19:54:30.736: INFO: Updating DaemonSet daemon-set Mar 12 19:54:30.736: INFO: Make sure DaemonSet rollback is complete Mar 12 19:54:30.739: INFO: Wrong image for pod: daemon-set-7kt5w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 12 19:54:30.739: INFO: Pod daemon-set-7kt5w is not available Mar 12 19:54:30.743: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:54:31.761: INFO: Wrong image for pod: daemon-set-7kt5w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 12 19:54:31.761: INFO: Pod daemon-set-7kt5w is not available Mar 12 19:54:31.768: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 19:54:32.747: INFO: Pod daemon-set-m784k is not available Mar 12 19:54:32.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5959, will wait for the garbage collector to delete the pods Mar 12 19:54:32.812: INFO: Deleting DaemonSet.extensions daemon-set took: 4.661288ms Mar 12 19:54:33.112: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.199587ms Mar 12 19:54:46.115: INFO: Number of nodes with available pods: 0 Mar 12 19:54:46.115: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 19:54:46.118: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5959/daemonsets","resourceVersion":"1215190"},"items":null} Mar 12 19:54:46.120: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5959/pods","resourceVersion":"1215190"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:54:46.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5959" for this suite. • [SLOW TEST:21.613 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":215,"skipped":3473,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:54:46.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:55:02.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7834" for this suite. • [SLOW TEST:16.221 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":216,"skipped":3477,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:55:02.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:55:02.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56c01c38-61e3-4151-894e-c08866f84315" in namespace "projected-8648" to be "success or failure" Mar 12 19:55:02.438: INFO: Pod "downwardapi-volume-56c01c38-61e3-4151-894e-c08866f84315": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694775ms Mar 12 19:55:04.442: INFO: Pod "downwardapi-volume-56c01c38-61e3-4151-894e-c08866f84315": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006532091s STEP: Saw pod success Mar 12 19:55:04.442: INFO: Pod "downwardapi-volume-56c01c38-61e3-4151-894e-c08866f84315" satisfied condition "success or failure" Mar 12 19:55:04.445: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-56c01c38-61e3-4151-894e-c08866f84315 container client-container: STEP: delete the pod Mar 12 19:55:04.491: INFO: Waiting for pod downwardapi-volume-56c01c38-61e3-4151-894e-c08866f84315 to disappear Mar 12 19:55:04.500: INFO: Pod downwardapi-volume-56c01c38-61e3-4151-894e-c08866f84315 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:55:04.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8648" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3479,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:55:04.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 12 19:55:04.581: INFO: Waiting up to 5m0s for pod "pod-66f0eb80-7f2c-4f84-8dc3-d40d6cbe4776" in namespace "emptydir-2755" to be "success or failure" Mar 12 19:55:04.584: INFO: Pod "pod-66f0eb80-7f2c-4f84-8dc3-d40d6cbe4776": Phase="Pending", Reason="", readiness=false. Elapsed: 3.183565ms Mar 12 19:55:06.587: INFO: Pod "pod-66f0eb80-7f2c-4f84-8dc3-d40d6cbe4776": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006443116s STEP: Saw pod success Mar 12 19:55:06.587: INFO: Pod "pod-66f0eb80-7f2c-4f84-8dc3-d40d6cbe4776" satisfied condition "success or failure" Mar 12 19:55:06.589: INFO: Trying to get logs from node jerma-worker2 pod pod-66f0eb80-7f2c-4f84-8dc3-d40d6cbe4776 container test-container: STEP: delete the pod Mar 12 19:55:06.603: INFO: Waiting for pod pod-66f0eb80-7f2c-4f84-8dc3-d40d6cbe4776 to disappear Mar 12 19:55:06.608: INFO: Pod pod-66f0eb80-7f2c-4f84-8dc3-d40d6cbe4776 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:55:06.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2755" for this suite. 
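For the (root,0777,tmpfs) variant, the only structural difference from the default-medium case is medium: Memory, which backs the volume with tmpfs. A sketch, with a mount check added so the medium is visible:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /ed/f && chmod 0777 /ed/f && ls -l /ed/f && mount | grep ' /ed '"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory             # the "tmpfs" part of the tuple
EOF
kubectl logs emptydir-tmpfs-demo # expect -rwxrwxrwx plus a "tmpfs on /ed" mount line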
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:55:06.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 19:55:06.718: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 19:55:06.741: INFO: Waiting for terminating namespaces to be deleted... Mar 12 19:55:06.743: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 12 19:55:06.748: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:55:06.748: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 19:55:06.748: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:55:06.748: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:55:06.748: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 12 19:55:06.754: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:55:06.754: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 19:55:06.754: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 19:55:06.754: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-1b0be90f-52d2-4b45-9b03-080a3bec4c72 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-1b0be90f-52d2-4b45-9b03-080a3bec4c72 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-1b0be90f-52d2-4b45-9b03-080a3bec4c72 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:55:16.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-911" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.318 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":219,"skipped":3546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:55:16.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-a89fac6e-f54b-4cf9-b379-0e798308b1e1 STEP: Creating configMap with name cm-test-opt-upd-f7b9709e-4ea6-4d8f-b136-a3888f63f31f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a89fac6e-f54b-4cf9-b379-0e798308b1e1 STEP: Updating configmap cm-test-opt-upd-f7b9709e-4ea6-4d8f-b136-a3888f63f31f STEP: Creating configMap with name cm-test-opt-create-c25ff6b4-3007-451c-899c-525c23d9a29a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:56:51.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5910" for this suite. 
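The optional-updates spec mounts configMap volumes marked optional: true, then deletes one map, updates another, and creates a third while the pod is running, waiting for the kubelet to reflect each change in the mounted files. A compressed sketch of two of those volumes plus the mutations (all names illustrative):

kubectl create configmap cm-opt-upd --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: c
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm-upd/data-1 2>/dev/null; ls /etc/cm-new 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: upd
      mountPath: /etc/cm-upd
    - name: new
      mountPath: /etc/cm-new
  volumes:
  - name: upd
    configMap:
      name: cm-opt-upd
      optional: true
  - name: new
    configMap:
      name: cm-opt-create      # does not exist yet; optional: true makes that legal
      optional: true
EOF
# Mutations the running pod should observe without a restart:
kubectl patch configmap cm-opt-upd -p '{"data":{"data-1":"value-2"}}'
kubectl create configmap cm-opt-create --from-literal=data-1=value-1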
• [SLOW TEST:94.534 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3576,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:56:51.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 12 19:56:51.543: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix200002293/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:56:51.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4433" for this suite. 
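kubectl proxy can listen on a Unix domain socket instead of TCP; the spec simply starts it with --unix-socket and fetches /api/ through the socket. Reproducing by hand (the socket path is arbitrary):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
# curl speaks HTTP over a Unix socket; the hostname part of the URL is ignored.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1    # stop the background proxy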
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":221,"skipped":3581,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:56:51.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:56:52.183: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:56:55.231: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 12 19:56:59.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5385 to-be-attached-pod -i -c=container1' Mar 12 19:56:59.378: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:56:59.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5385" for this suite. STEP: Destroying namespace "webhook-5385-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.873 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":222,"skipped":3586,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:56:59.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9734/configmap-test-1e838e53-9676-4ef7-9ddd-ccae148a1e74 STEP: Creating a pod to test consume configMaps Mar 12 19:56:59.538: INFO: Waiting up to 5m0s for pod "pod-configmaps-f9150088-bbab-41e6-9c65-3ccdf9b6ff36" in namespace "configmap-9734" to be "success or failure" Mar 12 19:56:59.581: INFO: Pod "pod-configmaps-f9150088-bbab-41e6-9c65-3ccdf9b6ff36": Phase="Pending", Reason="", readiness=false. Elapsed: 43.811328ms Mar 12 19:57:01.599: INFO: Pod "pod-configmaps-f9150088-bbab-41e6-9c65-3ccdf9b6ff36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061773466s STEP: Saw pod success Mar 12 19:57:01.599: INFO: Pod "pod-configmaps-f9150088-bbab-41e6-9c65-3ccdf9b6ff36" satisfied condition "success or failure" Mar 12 19:57:01.602: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f9150088-bbab-41e6-9c65-3ccdf9b6ff36 container env-test: STEP: delete the pod Mar 12 19:57:01.615: INFO: Waiting for pod pod-configmaps-f9150088-bbab-41e6-9c65-3ccdf9b6ff36 to disappear Mar 12 19:57:01.620: INFO: Pod pod-configmaps-f9150088-bbab-41e6-9c65-3ccdf9b6ff36 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:01.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9734" for this suite. 
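For the env-var path exercised next in the log, the container receives a variable via valueFrom/configMapKeyRef rather than a volume. A minimal sketch with hypothetical variable, ConfigMap, and key names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1", // hypothetical variable name
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // hypothetical ConfigMap
					Key:                  "data-1",
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}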
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3594,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:01.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:57:02.147: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:57:05.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:05.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-634" for this suite. STEP: Destroying namespace "webhook-634-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":224,"skipped":3599,"failed":0} S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:05.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-98c3bfb2-a105-47c5-89d6-0a99fb0e1e87 STEP: Creating secret with name secret-projected-all-test-volume-29699c89-6281-4ae8-b52b-9cb94d5382d6 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 12 19:57:05.971: INFO: Waiting up to 5m0s for pod "projected-volume-36674860-2957-4b10-b3bc-aa5c57958212" in namespace "projected-7715" to be "success or failure" Mar 12 19:57:05.992: INFO: Pod "projected-volume-36674860-2957-4b10-b3bc-aa5c57958212": Phase="Pending", Reason="", readiness=false. Elapsed: 20.658499ms Mar 12 19:57:07.995: INFO: Pod "projected-volume-36674860-2957-4b10-b3bc-aa5c57958212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024080122s STEP: Saw pod success Mar 12 19:57:07.995: INFO: Pod "projected-volume-36674860-2957-4b10-b3bc-aa5c57958212" satisfied condition "success or failure" Mar 12 19:57:07.997: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-36674860-2957-4b10-b3bc-aa5c57958212 container projected-all-volume-test: STEP: delete the pod Mar 12 19:57:08.104: INFO: Waiting for pod projected-volume-36674860-2957-4b10-b3bc-aa5c57958212 to disappear Mar 12 19:57:08.135: INFO: Pod projected-volume-36674860-2957-4b10-b3bc-aa5c57958212 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:08.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7715" for this suite. 
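All three sources in the projection test that follows land under a single mount because one projected volume may merge configMap, secret, and downwardAPI entries. A sketch of such a volume (object names are stand-ins for the generated ones in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}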
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3600,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:08.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 12 19:57:08.169: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:08.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8727" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":226,"skipped":3603,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:08.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-d5ed4df1-87f0-452f-90df-c11d87c4827c STEP: Creating a pod to test consume secrets Mar 12 19:57:08.283: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3796993d-d3a5-451a-861a-6dd658b1e714" in namespace "projected-6107" to be "success or failure" Mar 12 19:57:08.288: INFO: Pod "pod-projected-secrets-3796993d-d3a5-451a-861a-6dd658b1e714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.690911ms Mar 12 19:57:10.291: INFO: Pod "pod-projected-secrets-3796993d-d3a5-451a-861a-6dd658b1e714": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007636978s STEP: Saw pod success Mar 12 19:57:10.291: INFO: Pod "pod-projected-secrets-3796993d-d3a5-451a-861a-6dd658b1e714" satisfied condition "success or failure" Mar 12 19:57:10.293: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3796993d-d3a5-451a-861a-6dd658b1e714 container projected-secret-volume-test: STEP: delete the pod Mar 12 19:57:10.324: INFO: Waiting for pod pod-projected-secrets-3796993d-d3a5-451a-861a-6dd658b1e714 to disappear Mar 12 19:57:10.333: INFO: Pod pod-projected-secrets-3796993d-d3a5-451a-861a-6dd658b1e714 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:10.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6107" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:10.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 12 19:57:10.412: INFO: Waiting up to 5m0s for pod "pod-2fd82b6a-8634-4957-b7d4-d693b1deee17" in namespace "emptydir-155" to be "success or failure" Mar 12 19:57:10.429: INFO: Pod "pod-2fd82b6a-8634-4957-b7d4-d693b1deee17": Phase="Pending", Reason="", readiness=false. Elapsed: 16.683094ms Mar 12 19:57:12.433: INFO: Pod "pod-2fd82b6a-8634-4957-b7d4-d693b1deee17": Phase="Running", Reason="", readiness=true. Elapsed: 2.020292386s Mar 12 19:57:14.434: INFO: Pod "pod-2fd82b6a-8634-4957-b7d4-d693b1deee17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022251333s STEP: Saw pod success Mar 12 19:57:14.435: INFO: Pod "pod-2fd82b6a-8634-4957-b7d4-d693b1deee17" satisfied condition "success or failure" Mar 12 19:57:14.436: INFO: Trying to get logs from node jerma-worker2 pod pod-2fd82b6a-8634-4957-b7d4-d693b1deee17 container test-container: STEP: delete the pod Mar 12 19:57:14.487: INFO: Waiting for pod pod-2fd82b6a-8634-4957-b7d4-d693b1deee17 to disappear Mar 12 19:57:14.494: INFO: Pod pod-2fd82b6a-8634-4957-b7d4-d693b1deee17 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:14.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-155" for this suite. 
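The (root,0644,tmpfs) case above boils down to a memory-backed emptyDir plus a container that writes a file as root with mode 0644 and reads it back. A rough equivalent of the test pod; the real suite uses its agnhost mounttest image, so busybox and the shell commands here are stand-ins:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a 0644 file on the tmpfs mount and print its mode back.
				Command:      []string{"sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}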
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3638,"failed":0} ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:14.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 19:57:19.080: INFO: Successfully updated pod "labelsupdated3f6134f-05ac-438c-b989-9d023ddc4473" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2496" for this suite. • [SLOW TEST:6.628 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3638,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:21.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 19:57:22.025: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 19:57:24.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719639842, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719639842, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719639842, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719639841, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 19:57:27.068: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 12 19:57:27.093: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:27.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2299" for this suite. STEP: Destroying namespace "webhook-2299-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.086 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":230,"skipped":3659,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:27.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:57:27.267: INFO: Waiting up to 5m0s for pod "busybox-user-65534-49af0128-a61b-43bc-bbb0-ab05a40546d0" in namespace "security-context-test-6818" to be "success or failure" Mar 12 19:57:27.284: INFO: Pod "busybox-user-65534-49af0128-a61b-43bc-bbb0-ab05a40546d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.935311ms Mar 12 19:57:29.288: INFO: Pod "busybox-user-65534-49af0128-a61b-43bc-bbb0-ab05a40546d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020459412s Mar 12 19:57:29.288: INFO: Pod "busybox-user-65534-49af0128-a61b-43bc-bbb0-ab05a40546d0" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:29.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6818" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3681,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:29.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9d022c24-cc52-4ea7-9669-9bd4be367510 STEP: Creating a pod to test consume secrets Mar 12 19:57:29.426: INFO: Waiting up to 5m0s for pod "pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b" in namespace "secrets-1001" to be "success or failure" Mar 12 19:57:29.430: INFO: Pod "pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289211ms Mar 12 19:57:31.433: INFO: Pod "pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007246222s Mar 12 19:57:33.437: INFO: Pod "pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011090308s STEP: Saw pod success Mar 12 19:57:33.437: INFO: Pod "pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b" satisfied condition "success or failure" Mar 12 19:57:33.439: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b container secret-volume-test: STEP: delete the pod Mar 12 19:57:33.468: INFO: Waiting for pod pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b to disappear Mar 12 19:57:33.472: INFO: Pod pod-secrets-d0db2fd5-a813-4ca0-a16c-ecfec0a1513b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:33.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1001" for this suite. STEP: Destroying namespace "secret-namespace-3700" for this suite. 
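The point of the two namespaces in the secrets test above is that secret volume lookups are always scoped to the pod's own namespace, so a same-named secret elsewhere is invisible to the mount. A sketch of the shape of that fixture, using the namespace names from the log and hypothetical secret data:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Same secret name in two namespaces; only the pod's namespace matters.
	mk := func(ns, value string) *corev1.Secret {
		return &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: "secret-test"},
			Data:       map[string][]byte{"data-1": []byte(value)},
		}
	}
	a, b := mk("secrets-1001", "value-1"), mk("secret-namespace-3700", "value-2")

	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			// Resolved against the namespace the pod runs in, i.e. secrets-1001.
			Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
		},
	}
	for _, obj := range []interface{}{a, b, vol} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}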
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3692,"failed":0} SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:33.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:57:33.557: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2432" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3695,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:37.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:43.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5516" for this suite. STEP: Destroying namespace "nsdeletetest-8967" for this suite. Mar 12 19:57:43.961: INFO: Namespace nsdeletetest-8967 was already deleted STEP: Destroying namespace "nsdeletetest-7038" for this suite. 
• [SLOW TEST:6.226 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":234,"skipped":3696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:43.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6923.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6923.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 19:57:48.102: INFO: DNS probes using dns-6923/dns-test-4016e6e6-de2e-4136-a884-d715e7820a26 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:48.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6923" for this suite. 
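Each dig probe in the DNS test asserts that cluster DNS answers for kubernetes.default.svc.cluster.local and for the pod's generated A record. From inside any cluster pod, the Go stdlib resolver shows the same thing, since /etc/resolv.conf points at the cluster DNS service:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Meaningful only inside a cluster pod.
	// Pod A records follow <dashed-ip>.<namespace>.pod.cluster.local,
	// matching the awk rewrite in the probe script above.
	for _, name := range []string{
		"kubernetes.default.svc.cluster.local",
	} {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}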
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":235,"skipped":3744,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:48.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0312 19:57:49.336924 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 19:57:49.336: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:49.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9738" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":236,"skipped":3759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:49.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 19:57:49.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be" in namespace "projected-9144" to be "success or failure" Mar 12 19:57:49.445: INFO: Pod "downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be": Phase="Pending", Reason="", readiness=false. Elapsed: 24.990948ms Mar 12 19:57:51.449: INFO: Pod "downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028581565s Mar 12 19:57:53.452: INFO: Pod "downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031259586s STEP: Saw pod success Mar 12 19:57:53.452: INFO: Pod "downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be" satisfied condition "success or failure" Mar 12 19:57:53.453: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be container client-container: STEP: delete the pod Mar 12 19:57:53.511: INFO: Waiting for pod downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be to disappear Mar 12 19:57:53.515: INFO: Pod downwardapi-volume-dcf89ae5-0783-406e-b3ae-de73d830c2be no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:53.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9144" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3786,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:53.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 19:57:53.573: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:56.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8027" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":238,"skipped":3791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:56.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 19:57:56.739: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-3f759038-a257-4be8-8289-b3a4530e62d9" in namespace "security-context-test-3820" to be "success or failure" Mar 12 19:57:56.741: INFO: Pod "busybox-readonly-false-3f759038-a257-4be8-8289-b3a4530e62d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.426691ms Mar 12 19:57:58.744: INFO: Pod "busybox-readonly-false-3f759038-a257-4be8-8289-b3a4530e62d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00521807s Mar 12 19:57:58.744: INFO: Pod "busybox-readonly-false-3f759038-a257-4be8-8289-b3a4530e62d9" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:57:58.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3820" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3859,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:57:58.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:58:09.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-815" for this suite. • [SLOW TEST:11.195 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":240,"skipped":3881,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:58:09.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-290670e0-f789-45ce-9535-3c5d94f53c83 STEP: Creating a pod to test consume secrets Mar 12 19:58:09.998: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a3530a42-839e-425a-8b61-baaa459da8e1" in namespace "projected-4467" to be "success or failure" Mar 12 19:58:10.000: INFO: Pod "pod-projected-secrets-a3530a42-839e-425a-8b61-baaa459da8e1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.788055ms Mar 12 19:58:12.004: INFO: Pod "pod-projected-secrets-a3530a42-839e-425a-8b61-baaa459da8e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005223001s STEP: Saw pod success Mar 12 19:58:12.004: INFO: Pod "pod-projected-secrets-a3530a42-839e-425a-8b61-baaa459da8e1" satisfied condition "success or failure" Mar 12 19:58:12.005: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-a3530a42-839e-425a-8b61-baaa459da8e1 container projected-secret-volume-test: STEP: delete the pod Mar 12 19:58:12.045: INFO: Waiting for pod pod-projected-secrets-a3530a42-839e-425a-8b61-baaa459da8e1 to disappear Mar 12 19:58:12.064: INFO: Pod pod-projected-secrets-a3530a42-839e-425a-8b61-baaa459da8e1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:58:12.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4467" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3889,"failed":0} SSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:58:12.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 12 19:58:14.671: INFO: Successfully updated pod "pod-update-activedeadlineseconds-79ad7225-5527-490d-ad42-8c3308ceb50c" Mar 12 19:58:14.671: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-79ad7225-5527-490d-ad42-8c3308ceb50c" in namespace "pods-1138" to be "terminated due to deadline exceeded" Mar 12 19:58:14.685: INFO: Pod "pod-update-activedeadlineseconds-79ad7225-5527-490d-ad42-8c3308ceb50c": Phase="Running", Reason="", readiness=true. Elapsed: 14.100524ms Mar 12 19:58:16.688: INFO: Pod "pod-update-activedeadlineseconds-79ad7225-5527-490d-ad42-8c3308ceb50c": Phase="Running", Reason="", readiness=true. Elapsed: 2.017313425s Mar 12 19:58:18.692: INFO: Pod "pod-update-activedeadlineseconds-79ad7225-5527-490d-ad42-8c3308ceb50c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.02093238s Mar 12 19:58:18.692: INFO: Pod "pod-update-activedeadlineseconds-79ad7225-5527-490d-ad42-8c3308ceb50c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:58:18.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1138" for this suite. 
• [SLOW TEST:6.629 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3893,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:58:18.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 12 19:58:22.824: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 19:58:22.831: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 19:58:24.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 19:58:24.835: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 19:58:26.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 19:58:26.841: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:58:26.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2915" for this suite. 
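The prestop test couples an exec action to pod deletion: the hook runs inside the container after deletion is requested but before SIGTERM reaches the main process, and the suite's handler pod records the hit. A sketch of the container side; current k8s.io/api names the handler type LifecycleHandler (releases of the v1.17 era called it Handler), and the hook target URL here is a stand-in for the test's handler pod:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "pod-with-prestop-exec-hook",
		Image:   "busybox",
		Command: []string{"sh", "-c", "sleep 600"},
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.LifecycleHandler{
				Exec: &corev1.ExecAction{
					// Runs on deletion, before SIGTERM is delivered.
					Command: []string{"sh", "-c", "wget -qO- http://10.0.0.1:8080/echo?msg=prestop"}, // hypothetical target
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}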
• [SLOW TEST:8.157 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:58:26.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:58:28.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-356" for this suite. 
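The image-defaults test that follows asserts only that, with command and args both blank, the kubelet runs the image's own ENTRYPOINT and CMD unchanged. A sketch of the relevant container fields:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox", // example image; its ENTRYPOINT/CMD run unmodified
		// Command and Args deliberately left nil: the image defaults apply.
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}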
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3935,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:58:28.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2850 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 19:58:29.009: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 19:58:47.171: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.48 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2850 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:58:47.171: INFO: >>> kubeConfig: /root/.kube/config I0312 19:58:47.203621 6 log.go:172] (0xc00173ac60) (0xc0021da5a0) Create stream I0312 19:58:47.203661 6 log.go:172] (0xc00173ac60) (0xc0021da5a0) Stream added, broadcasting: 1 I0312 19:58:47.206173 6 log.go:172] (0xc00173ac60) Reply frame received for 1 I0312 19:58:47.206208 6 log.go:172] (0xc00173ac60) (0xc001a71ea0) Create stream I0312 19:58:47.206219 6 log.go:172] (0xc00173ac60) (0xc001a71ea0) Stream added, broadcasting: 3 I0312 19:58:47.207189 6 log.go:172] (0xc00173ac60) Reply frame received for 3 I0312 19:58:47.207237 6 log.go:172] (0xc00173ac60) (0xc0011128c0) Create stream I0312 19:58:47.207257 6 log.go:172] (0xc00173ac60) (0xc0011128c0) Stream added, broadcasting: 5 I0312 19:58:47.208304 6 log.go:172] (0xc00173ac60) Reply frame received for 5 I0312 19:58:48.278071 6 log.go:172] (0xc00173ac60) Data frame received for 3 I0312 19:58:48.278106 6 log.go:172] (0xc001a71ea0) (3) Data frame handling I0312 19:58:48.278173 6 log.go:172] (0xc001a71ea0) (3) Data frame sent I0312 19:58:48.278206 6 log.go:172] (0xc00173ac60) Data frame received for 3 I0312 19:58:48.278213 6 log.go:172] (0xc001a71ea0) (3) Data frame handling I0312 19:58:48.278274 6 log.go:172] (0xc00173ac60) Data frame received for 5 I0312 19:58:48.278300 6 log.go:172] (0xc0011128c0) (5) Data frame handling I0312 19:58:48.279260 6 log.go:172] (0xc00173ac60) Data frame received for 1 I0312 19:58:48.279286 6 log.go:172] (0xc0021da5a0) (1) Data frame handling I0312 19:58:48.279315 6 log.go:172] (0xc0021da5a0) (1) Data frame sent I0312 19:58:48.279326 6 log.go:172] (0xc00173ac60) (0xc0021da5a0) Stream removed, broadcasting: 1 I0312 19:58:48.279336 6 log.go:172] (0xc00173ac60) Go away received I0312 19:58:48.279483 6 log.go:172] (0xc00173ac60) (0xc0021da5a0) Stream removed, broadcasting: 1 I0312 19:58:48.279499 6 log.go:172] (0xc00173ac60) (0xc001a71ea0) Stream removed, broadcasting: 3 I0312 19:58:48.279517 6 log.go:172] 
(0xc00173ac60) (0xc0011128c0) Stream removed, broadcasting: 5 Mar 12 19:58:48.279: INFO: Found all expected endpoints: [netserver-0] Mar 12 19:58:48.281: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.45 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2850 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 19:58:48.281: INFO: >>> kubeConfig: /root/.kube/config I0312 19:58:48.303635 6 log.go:172] (0xc001c9a420) (0xc002412280) Create stream I0312 19:58:48.303656 6 log.go:172] (0xc001c9a420) (0xc002412280) Stream added, broadcasting: 1 I0312 19:58:48.306010 6 log.go:172] (0xc001c9a420) Reply frame received for 1 I0312 19:58:48.306037 6 log.go:172] (0xc001c9a420) (0xc001436aa0) Create stream I0312 19:58:48.306045 6 log.go:172] (0xc001c9a420) (0xc001436aa0) Stream added, broadcasting: 3 I0312 19:58:48.308762 6 log.go:172] (0xc001c9a420) Reply frame received for 3 I0312 19:58:48.308795 6 log.go:172] (0xc001c9a420) (0xc0024123c0) Create stream I0312 19:58:48.308802 6 log.go:172] (0xc001c9a420) (0xc0024123c0) Stream added, broadcasting: 5 I0312 19:58:48.309348 6 log.go:172] (0xc001c9a420) Reply frame received for 5 I0312 19:58:49.368060 6 log.go:172] (0xc001c9a420) Data frame received for 3 I0312 19:58:49.368082 6 log.go:172] (0xc001436aa0) (3) Data frame handling I0312 19:58:49.368103 6 log.go:172] (0xc001436aa0) (3) Data frame sent I0312 19:58:49.368420 6 log.go:172] (0xc001c9a420) Data frame received for 5 I0312 19:58:49.368446 6 log.go:172] (0xc0024123c0) (5) Data frame handling I0312 19:58:49.368985 6 log.go:172] (0xc001c9a420) Data frame received for 3 I0312 19:58:49.369007 6 log.go:172] (0xc001436aa0) (3) Data frame handling I0312 19:58:49.370481 6 log.go:172] (0xc001c9a420) Data frame received for 1 I0312 19:58:49.370511 6 log.go:172] (0xc002412280) (1) Data frame handling I0312 19:58:49.370546 6 log.go:172] (0xc002412280) (1) Data frame sent I0312 19:58:49.370604 6 log.go:172] (0xc001c9a420) (0xc002412280) Stream removed, broadcasting: 1 I0312 19:58:49.370696 6 log.go:172] (0xc001c9a420) (0xc002412280) Stream removed, broadcasting: 1 I0312 19:58:49.370718 6 log.go:172] (0xc001c9a420) (0xc001436aa0) Stream removed, broadcasting: 3 I0312 19:58:49.370744 6 log.go:172] (0xc001c9a420) Go away received I0312 19:58:49.370784 6 log.go:172] (0xc001c9a420) (0xc0024123c0) Stream removed, broadcasting: 5 Mar 12 19:58:49.370: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:58:49.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2850" for this suite. 
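[Editor's note] The exec streams above run `echo hostName | nc -w 1 -u <podIP> 8081` from a host-network pod against each netserver pod; the agnhost netserver answers the "hostName" probe with its own hostname, which is how the expected endpoints [netserver-0] and [netserver-1] are matched. A minimal Go sketch of the same UDP round trip (the address is one of the pod IPs from this run, used purely for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "10.244.2.48:8081") // illustrative pod IP:port
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Send the probe string; the netserver replies with its hostname.
	if _, err := conn.Write([]byte("hostName")); err != nil {
		panic(err)
	}
	// Mirror nc's -w 1: give up if no reply arrives within one second.
	conn.SetReadDeadline(time.Now().Add(1 * time.Second))
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("reply: %q\n", buf[:n])
}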
• [SLOW TEST:20.438 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3942,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:58:49.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 19:58:51.465: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 19:58:51.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-977" for this suite. 
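[Editor's note] The Container Runtime spec above asserts that with TerminationMessagePolicy FallbackToLogsOnError, a container that exits successfully ends up with an empty termination message (logs are only substituted on failure). A minimal Go sketch of such a container spec; the image and command are assumptions for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "exit 0"}, // succeeds, writes no message file
				// Logs become the termination message only when the container
				// fails; on success the message stays empty, matching the
				// "Expected: &{} to match" assertion above.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}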
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 19:58:51.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-c70d996a-0bad-4a42-8a5b-71227a11c4ff STEP: Creating configMap with name cm-test-opt-upd-4da521d2-0706-4ff3-a392-c09437d6f9ea STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c70d996a-0bad-4a42-8a5b-71227a11c4ff STEP: Updating configmap cm-test-opt-upd-4da521d2-0706-4ff3-a392-c09437d6f9ea STEP: Creating configMap with name cm-test-opt-create-7a804cd4-7dea-4a98-a99f-9530d2d4dc06 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:00:04.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3655" for this suite. 
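[Editor's note] The Projected configMap spec above mounts optional ConfigMap sources, then deletes one, updates another, and creates a third, waiting for the kubelet to reflect each change in the volume. A minimal Go sketch of one such optional projected source (the real test wires up three volumes; names here are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-optional-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
								// Optional: the pod keeps running even after the
								// ConfigMap is deleted; the volume is refreshed
								// when it is updated or recreated.
								Optional: boolPtr(true),
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "main",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}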
• [SLOW TEST:72.517 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3982,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:00:04.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 20:00:04.061: INFO: Creating deployment "test-recreate-deployment" Mar 12 20:00:04.078: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 12 20:00:04.127: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 12 20:00:06.135: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 12 20:00:06.138: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 12 20:00:06.144: INFO: Updating deployment test-recreate-deployment Mar 12 20:00:06.144: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 20:00:06.365: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3167 /apis/apps/v1/namespaces/deployment-3167/deployments/test-recreate-deployment 9a3e8737-705e-4b2a-a0b7-56eafcca4111 1217336 2 2020-03-12 20:00:04 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030198a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-12 20:00:06 +0000 UTC,LastTransitionTime:2020-03-12 20:00:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-12 20:00:06 +0000 UTC,LastTransitionTime:2020-03-12 20:00:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 12 20:00:06.372: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3167 /apis/apps/v1/namespaces/deployment-3167/replicasets/test-recreate-deployment-5f94c574ff 8b8c2d51-f259-4189-ad9f-c66d4b7390b5 1217331 1 2020-03-12 20:00:06 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9a3e8737-705e-4b2a-a0b7-56eafcca4111 0xc003019c27 0xc003019c28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003019c88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 20:00:06.372: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 12 20:00:06.372: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3167 /apis/apps/v1/namespaces/deployment-3167/replicasets/test-recreate-deployment-799c574856 aaf55c54-f268-4a7c-9d3f-895f47705289 1217323 2 2020-03-12 20:00:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9a3e8737-705e-4b2a-a0b7-56eafcca4111 0xc003019cf7 0xc003019cf8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] 
[{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003019d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 20:00:06.375: INFO: Pod "test-recreate-deployment-5f94c574ff-99v55" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-99v55 test-recreate-deployment-5f94c574ff- deployment-3167 /api/v1/namespaces/deployment-3167/pods/test-recreate-deployment-5f94c574ff-99v55 c85a4a22-5185-49db-85c4-dfefba3afec3 1217335 0 2020-03-12 20:00:06 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 8b8c2d51-f259-4189-ad9f-c66d4b7390b5 0xc00395a4c7 0xc00395a4c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7g2np,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7g2np,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7g2np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:00:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:00:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:00:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:00:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 20:00:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:00:06.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3167" for this suite. 
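[Editor's note] The struct dump above shows Strategy Type:Recreate, which is the property under test: the old ReplicaSet is scaled to zero before the new one comes up, so old and new pods never run side by side. A minimal Go sketch of a Deployment with that strategy, reusing the labels and image visible in the dump (everything else is illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate tears down all old pods before any new pod starts,
			// which is exactly what the watch in this spec verifies.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}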
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":248,"skipped":4005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:00:06.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e332b775-6283-48ad-92f9-9b147fa739a8 STEP: Creating a pod to test consume secrets Mar 12 20:00:06.427: INFO: Waiting up to 5m0s for pod "pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537" in namespace "secrets-9000" to be "success or failure" Mar 12 20:00:06.469: INFO: Pod "pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537": Phase="Pending", Reason="", readiness=false. Elapsed: 41.762273ms Mar 12 20:00:08.472: INFO: Pod "pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044978066s Mar 12 20:00:10.476: INFO: Pod "pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048256317s STEP: Saw pod success Mar 12 20:00:10.476: INFO: Pod "pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537" satisfied condition "success or failure" Mar 12 20:00:10.478: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537 container secret-volume-test: STEP: delete the pod Mar 12 20:00:10.564: INFO: Waiting for pod pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537 to disappear Mar 12 20:00:10.571: INFO: Pod pod-secrets-7e9252b2-e577-4a3a-bb4e-7ce329639537 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:00:10.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9000" for this suite. 
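[Editor's note] The Secrets spec above mounts a secret volume as a non-root user with an explicit defaultMode and an fsGroup, then checks the projected file's mode and ownership from inside the container. A minimal Go sketch of that shape; the uid, gid, and mode values are assumptions chosen for illustration, not read from the run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-defaultmode-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root uid (illustrative)
				FSGroup:   int64Ptr(1001), // group ownership applied to the volume
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test",
						DefaultMode: int32Ptr(0440), // octal file mode for projected keys
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("%+v\n", pod)
}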
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4047,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:00:10.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-81337311-ba5e-4e6a-b9b0-58edac717510 in namespace container-probe-2265 Mar 12 20:00:14.683: INFO: Started pod busybox-81337311-ba5e-4e6a-b9b0-58edac717510 in namespace container-probe-2265 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 20:00:14.688: INFO: Initial restart count of pod busybox-81337311-ba5e-4e6a-b9b0-58edac717510 is 0 Mar 12 20:01:00.797: INFO: Restart count of pod container-probe-2265/busybox-81337311-ba5e-4e6a-b9b0-58edac717510 is now 1 (46.109005273s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:01:00.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2265" for this suite. 
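[Editor's note] The probe spec above expects exactly one restart: the container creates /tmp/health, later removes it, the exec probe `cat /tmp/health` starts failing, and the kubelet restarts the container (restartCount goes 0 to 1 after ~46s). The sketch below uses the canonical upstream pattern for this case; the exact command and timings are assumptions. Note that in k8s.io/api releases newer than the v1.17 era shown in this log, the embedded Handler field was renamed ProbeHandler.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29",
				// Create the health file, then remove it so the probe starts
				// failing and the kubelet restarts the container.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}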
• [SLOW TEST:50.264 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:01:00.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 12 20:01:00.891: INFO: Waiting up to 5m0s for pod "var-expansion-f7b33186-1730-447a-82c9-89d587d15a91" in namespace "var-expansion-6797" to be "success or failure" Mar 12 20:01:00.907: INFO: Pod "var-expansion-f7b33186-1730-447a-82c9-89d587d15a91": Phase="Pending", Reason="", readiness=false. Elapsed: 16.075617ms Mar 12 20:01:02.916: INFO: Pod "var-expansion-f7b33186-1730-447a-82c9-89d587d15a91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025238512s Mar 12 20:01:04.920: INFO: Pod "var-expansion-f7b33186-1730-447a-82c9-89d587d15a91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028904543s STEP: Saw pod success Mar 12 20:01:04.920: INFO: Pod "var-expansion-f7b33186-1730-447a-82c9-89d587d15a91" satisfied condition "success or failure" Mar 12 20:01:04.923: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-f7b33186-1730-447a-82c9-89d587d15a91 container dapi-container: STEP: delete the pod Mar 12 20:01:04.943: INFO: Waiting for pod var-expansion-f7b33186-1730-447a-82c9-89d587d15a91 to disappear Mar 12 20:01:04.948: INFO: Pod var-expansion-f7b33186-1730-447a-82c9-89d587d15a91 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:01:04.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6797" for this suite. 
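[Editor's note] The Variable Expansion spec above composes environment variables out of other environment variables. Kubernetes expands $(VAR) references against earlier entries in the same env list before the container starts, no shell involved. A minimal Go sketch (variable names and values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) refer to the entries defined above;
					// the kubelet substitutes them when starting the container.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}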
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4090,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:01:04.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 20:01:05.028: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:01:05.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8166" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":252,"skipped":4107,"failed":0} ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:01:05.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-fee900ce-4b4c-4f01-83c5-1196c925373b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-fee900ce-4b4c-4f01-83c5-1196c925373b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:01:09.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4802" for this suite. 
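[Editor's note] The CustomResourceDefinition spec a few entries above gets, updates, and patches the /status sub-resource of a custom resource. Enabling the status subresource splits spec and status: writes to the main resource ignore .status, and writes to /status touch only .status. A minimal Go sketch of a CRD with the subresource enabled, using the apiextensions v1 types; the group and kind names are invented for illustration:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "noxus.mygroup.example.com"}, // illustrative
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "mygroup.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "noxus", Singular: "noxu", Kind: "Noxu", ListKind: "NoxuList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: boolPtr(true),
					},
				},
				// The status subresource: GET/PUT/PATCH on .../status
				// operate on .status independently of .spec.
				Subresources: &apiextensionsv1.CustomResourceSubresources{
					Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", crd)
}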
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:01:09.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-42e30929-0ab8-4d1e-a328-f5ce37c4a182 STEP: Creating secret with name s-test-opt-upd-fe5a1c3b-69c1-4804-809b-40ce1789683d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-42e30929-0ab8-4d1e-a328-f5ce37c4a182 STEP: Updating secret s-test-opt-upd-fe5a1c3b-69c1-4804-809b-40ce1789683d STEP: Creating secret with name s-test-opt-create-5faa8c58-674e-4aa2-b449-f1b21f530fb2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:02:42.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7108" for this suite. • [SLOW TEST:92.523 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4155,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:02:42.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 12 20:02:42.410: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 
917aa7da-4d3b-4b40-becf-732d5b34b265 1218000 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 20:02:42.410: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 917aa7da-4d3b-4b40-becf-732d5b34b265 1218000 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 12 20:02:52.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 917aa7da-4d3b-4b40-becf-732d5b34b265 1218048 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 12 20:02:52.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 917aa7da-4d3b-4b40-becf-732d5b34b265 1218048 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 12 20:03:02.423: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 917aa7da-4d3b-4b40-becf-732d5b34b265 1218084 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 20:03:02.424: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 917aa7da-4d3b-4b40-becf-732d5b34b265 1218084 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 12 20:03:12.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 917aa7da-4d3b-4b40-becf-732d5b34b265 1218114 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 20:03:12.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-a 917aa7da-4d3b-4b40-becf-732d5b34b265 1218114 0 2020-03-12 20:02:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 12 20:03:22.435: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-b b8c50607-d713-47b1-adce-3122bb07f741 1218144 0 2020-03-12 20:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 20:03:22.436: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-b b8c50607-d713-47b1-adce-3122bb07f741 1218144 0 2020-03-12 20:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 12 20:03:32.442: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-b b8c50607-d713-47b1-adce-3122bb07f741 1218174 0 2020-03-12 20:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 20:03:32.442: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2508 /api/v1/namespaces/watch-2508/configmaps/e2e-watch-test-configmap-b b8c50607-d713-47b1-adce-3122bb07f741 1218174 0 2020-03-12 20:03:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:03:42.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2508" for this suite. • [SLOW TEST:60.120 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":255,"skipped":4162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:03:42.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 20:03:45.090: INFO: Successfully updated pod "annotationupdatea84d2fbb-5867-4aa9-ae86-c98ba1531db6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:03:47.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2101" for this suite. 
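[Editor's note] The Watchers spec above opens three label-selector watches (label A, label B, A-or-B) and checks that each ADDED/MODIFIED/DELETED event reaches exactly the right watchers. A minimal client-go sketch of one such watch; note the context argument shown here was added to client-go after the v1.17-era client recorded in this log, and the kubeconfig path and selector are taken from this run for illustration only:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	// Watch only configmaps carrying label A, mirroring the test's
	// "watch-this-configmap" selectors.
	w, err := client.CoreV1().ConfigMaps("watch-2508").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Each event carries the type (ADDED/MODIFIED/DELETED) and the object,
	// the same pairs the log prints as "Got : ADDED &ConfigMap{...}".
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
}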
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4192,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:03:47.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 20:03:47.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1789' Mar 12 20:03:48.996: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 20:03:48.996: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 12 20:03:51.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1789' Mar 12 20:03:51.398: INFO: stderr: "" Mar 12 20:03:51.398: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:03:51.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1789" for this suite. 
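[Editor's note] The kubectl output above warns that `kubectl run --generator=deployment/apps.v1` is deprecated in favor of `kubectl create`. What the generator actually produced is an apps/v1 Deployment; the client-go sketch below creates the equivalent object directly. The `run: <name>` label is what that generator conventionally applied, stated here as an assumption, and the modern Create signature with a context argument postdates the client in this log:

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "e2e-test-httpd-deployment",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	created, err := client.AppsV1().Deployments("kubectl-1789").Create(context.TODO(), d, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}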
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":257,"skipped":4193,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:03:51.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 12 20:03:51.469: INFO: Waiting up to 5m0s for pod "var-expansion-b2e0ffc9-231d-44db-ba4c-76f8fe416b03" in namespace "var-expansion-2619" to be "success or failure" Mar 12 20:03:51.474: INFO: Pod "var-expansion-b2e0ffc9-231d-44db-ba4c-76f8fe416b03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.765746ms Mar 12 20:03:53.489: INFO: Pod "var-expansion-b2e0ffc9-231d-44db-ba4c-76f8fe416b03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02062817s STEP: Saw pod success Mar 12 20:03:53.489: INFO: Pod "var-expansion-b2e0ffc9-231d-44db-ba4c-76f8fe416b03" satisfied condition "success or failure" Mar 12 20:03:53.492: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-b2e0ffc9-231d-44db-ba4c-76f8fe416b03 container dapi-container: STEP: delete the pod Mar 12 20:03:53.523: INFO: Waiting for pod var-expansion-b2e0ffc9-231d-44db-ba4c-76f8fe416b03 to disappear Mar 12 20:03:53.528: INFO: Pod var-expansion-b2e0ffc9-231d-44db-ba4c-76f8fe416b03 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:03:53.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2619" for this suite. 
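[Editor's note] The spec above substitutes an environment variable into a container's args. As with env composition, the kubelet (not a shell) expands $(VAR) in command and args before the container starts. A minimal Go sketch; the variable name and value are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "args-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c"},
				// $(TEST_VAR) is expanded by the kubelet against the env
				// list below, so the substituted value appears in the args.
				Args: []string{"echo test-value: $(TEST_VAR)"},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}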
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:03:53.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 20:03:53.612: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 12 20:03:58.615: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 20:03:58.615: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 20:03:58.671: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3342 /apis/apps/v1/namespaces/deployment-3342/deployments/test-cleanup-deployment 64342a5a-818a-4b66-b7fc-641f8620be4d 1218357 1 2020-03-12 20:03:58 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000db9c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 12 20:03:58.695: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3342 /apis/apps/v1/namespaces/deployment-3342/replicasets/test-cleanup-deployment-55ffc6b7b6 e9a6d3e6-d1cc-4aa7-a40a-9ee90c0409ab 1218360 1 2020-03-12 20:03:58 +0000 UTC
map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 64342a5a-818a-4b66-b7fc-641f8620be4d 0xc0034eb897 0xc0034eb898}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034eb908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 20:03:58.695: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 12 20:03:58.695: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3342 /apis/apps/v1/namespaces/deployment-3342/replicasets/test-cleanup-controller 37e4aeda-8093-4a56-a518-45019a168dd5 1218359 1 2020-03-12 20:03:53 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 64342a5a-818a-4b66-b7fc-641f8620be4d 0xc0034eb7c7 0xc0034eb7c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0034eb828 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 20:03:58.737: INFO: Pod "test-cleanup-controller-csmz2" is available: &Pod{ObjectMeta:{test-cleanup-controller-csmz2 test-cleanup-controller- deployment-3342 /api/v1/namespaces/deployment-3342/pods/test-cleanup-controller-csmz2 201bdfdf-857a-4144-8508-b95c7c22edb4 1218327 0 2020-03-12 20:03:53 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 37e4aeda-8093-4a56-a518-45019a168dd5 0xc0034ebd57 0xc0034ebd58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gsld9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gsld9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gsld9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:03:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:03:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.50,StartTime:2020-03-12 20:03:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 20:03:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ca493df4859fc3c992785b9b065fcdd1675012ea471371f377b935f1a766bc9f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 20:03:58.737: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-5nx5k" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-5nx5k test-cleanup-deployment-55ffc6b7b6- deployment-3342 /api/v1/namespaces/deployment-3342/pods/test-cleanup-deployment-55ffc6b7b6-5nx5k 974f0b9f-6eaf-4732-868f-2eeb1fcccb86 1218364 0 2020-03-12 20:03:58 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 e9a6d3e6-d1cc-4aa7-a40a-9ee90c0409ab 0xc0034ebee7 0xc0034ebee8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gsld9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gsld9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gsld9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespa
ce:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 20:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:03:58.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3342" for this suite. • [SLOW TEST:5.228 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":259,"skipped":4278,"failed":0} SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:03:58.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8842 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8842 to expose endpoints map[] Mar 12 20:03:58.887: INFO: successfully validated that service multi-endpoint-test in namespace services-8842 exposes endpoints map[] (23.992737ms elapsed) STEP: Creating pod pod1 in namespace services-8842 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8842 to expose endpoints map[pod1:[100]] Mar 12 20:04:00.989: INFO: successfully validated that service multi-endpoint-test in namespace services-8842 exposes endpoints map[pod1:[100]] (2.067161555s elapsed) STEP: Creating pod pod2 in namespace services-8842 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8842 to expose endpoints map[pod1:[100] pod2:[101]] Mar 12 20:04:04.062: INFO: successfully validated that service multi-endpoint-test in namespace services-8842 exposes endpoints map[pod1:[100] pod2:[101]] (3.06904536s elapsed) STEP: Deleting pod pod1 in namespace services-8842 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8842 to expose endpoints map[pod2:[101]] Mar 12 20:04:05.092: INFO: successfully validated that 
service multi-endpoint-test in namespace services-8842 exposes endpoints map[pod2:[101]] (1.020790225s elapsed) STEP: Deleting pod pod2 in namespace services-8842 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8842 to expose endpoints map[] Mar 12 20:04:06.106: INFO: successfully validated that service multi-endpoint-test in namespace services-8842 exposes endpoints map[] (1.010378649s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:04:06.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8842" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.394 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":260,"skipped":4281,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:04:06.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3556, will wait for the garbage collector to delete the pods Mar 12 20:04:08.354: INFO: Deleting Job.batch foo took: 3.985145ms Mar 12 20:04:08.454: INFO: Terminating Job.batch foo pods took: 100.217137ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:04:42.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3556" for this suite. 
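The Job deletion above is a cascading delete: the Job object itself goes away in milliseconds, and the suite then waits roughly 34 seconds for the garbage collector to terminate the Job's pods before "Ensuring job was deleted" can pass. A hand-run equivalent, sketched with placeholder names and assuming a recent kubectl (the --cascade=foreground spelling landed after the v1.17 client used in this run):

# Create a throwaway Job, then delete it in the foreground so the command
# blocks until the garbage collector has removed the Job's pods.
kubectl create namespace job-demo
kubectl -n job-demo create job foo --image=busybox -- sleep 300
kubectl -n job-demo delete job foo --cascade=foreground
kubectl -n job-demo get pods -l job-name=foo   # expect: No resources found
kubectl delete namespace job-demo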
• [SLOW TEST:36.237 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":261,"skipped":4300,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:04:42.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 12 20:04:42.444: INFO: >>> kubeConfig: /root/.kube/config Mar 12 20:04:45.229: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:04:55.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7689" for this suite. 
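Behind the two extra kubeConfig loads above, the test registers two CRDs in different API groups and then checks that both kinds are published in the cluster's aggregated OpenAPI document. A rough single-group sketch of that check, using a placeholder group and kind rather than the randomized ones the suite generates:

# Register a CRD in its own API group, then look for it in /openapi/v2.
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.group1.example.com
spec:
  group: group1.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
# Once the CRD is established, its group shows up in the aggregated
# OpenAPI schema (publication can lag CRD creation by a few seconds):
kubectl get --raw /openapi/v2 | grep -o 'group1.example.com' | head -1
kubectl delete crd foos.group1.example.com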
• [SLOW TEST:13.131 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":262,"skipped":4309,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:04:55.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2586 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-2586 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2586 Mar 12 20:04:55.641: INFO: Found 0 stateful pods, waiting for 1 Mar 12 20:05:05.658: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 12 20:05:05.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2586 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 20:05:05.851: INFO: stderr: "I0312 20:05:05.769097 3684 log.go:172] (0xc0009c13f0) (0xc000a54780) Create stream\nI0312 20:05:05.769140 3684 log.go:172] (0xc0009c13f0) (0xc000a54780) Stream added, broadcasting: 1\nI0312 20:05:05.772702 3684 log.go:172] (0xc0009c13f0) Reply frame received for 1\nI0312 20:05:05.772730 3684 log.go:172] (0xc0009c13f0) (0xc000542640) Create stream\nI0312 20:05:05.772739 3684 log.go:172] (0xc0009c13f0) (0xc000542640) Stream added, broadcasting: 3\nI0312 20:05:05.773326 3684 log.go:172] (0xc0009c13f0) Reply frame received for 3\nI0312 20:05:05.773353 3684 log.go:172] (0xc0009c13f0) (0xc000215400) Create stream\nI0312 20:05:05.773361 3684 log.go:172] (0xc0009c13f0) (0xc000215400) Stream added, broadcasting: 5\nI0312 20:05:05.773964 3684 log.go:172] (0xc0009c13f0) Reply frame received for 5\nI0312 20:05:05.824634 3684 log.go:172] (0xc0009c13f0) Data frame received for 5\nI0312 20:05:05.824660 3684 log.go:172] (0xc000215400) (5) Data frame handling\nI0312 20:05:05.824676 3684 log.go:172] (0xc000215400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0312 20:05:05.846791 3684 log.go:172] (0xc0009c13f0) Data frame received for 5\nI0312 20:05:05.846847 3684 log.go:172] (0xc000215400) (5) Data frame handling\nI0312 20:05:05.846870 3684 log.go:172] (0xc0009c13f0) Data frame received for 3\nI0312 20:05:05.846878 3684 log.go:172] (0xc000542640) (3) Data frame handling\nI0312 20:05:05.846907 3684 log.go:172] (0xc000542640) (3) Data frame sent\nI0312 20:05:05.847075 3684 log.go:172] (0xc0009c13f0) Data frame received for 3\nI0312 20:05:05.847094 3684 log.go:172] (0xc000542640) (3) Data frame handling\nI0312 20:05:05.848334 3684 log.go:172] (0xc0009c13f0) Data frame received for 1\nI0312 20:05:05.848347 3684 log.go:172] (0xc000a54780) (1) Data frame handling\nI0312 20:05:05.848357 3684 log.go:172] (0xc000a54780) (1) Data frame sent\nI0312 20:05:05.848365 3684 log.go:172] (0xc0009c13f0) (0xc000a54780) Stream removed, broadcasting: 1\nI0312 20:05:05.848374 3684 log.go:172] (0xc0009c13f0) Go away received\nI0312 20:05:05.848713 3684 log.go:172] (0xc0009c13f0) (0xc000a54780) Stream removed, broadcasting: 1\nI0312 20:05:05.848728 3684 log.go:172] (0xc0009c13f0) (0xc000542640) Stream removed, broadcasting: 3\nI0312 20:05:05.848735 3684 log.go:172] (0xc0009c13f0) (0xc000215400) Stream removed, broadcasting: 5\n" Mar 12 20:05:05.852: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 20:05:05.852: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 20:05:05.855: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 12 20:05:15.859: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 20:05:15.859: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 20:05:15.890: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:15.890: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:15.890: INFO: Mar 12 20:05:15.890: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 12 20:05:16.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98355749s Mar 12 20:05:17.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978274375s Mar 12 20:05:18.904: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974036098s Mar 12 20:05:19.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969627516s Mar 12 20:05:20.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.965088614s Mar 12 20:05:21.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.960342818s Mar 12 20:05:22.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957172563s Mar 12 20:05:23.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952985529s Mar 12 20:05:24.929: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.477882ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2586 Mar 12 20:05:25.939: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2586 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 20:05:26.150: INFO: stderr: "I0312 20:05:26.077764 3706 log.go:172] (0xc00098efd0) (0xc0009c05a0) Create stream\nI0312 20:05:26.077807 3706 log.go:172] (0xc00098efd0) (0xc0009c05a0) Stream added, broadcasting: 1\nI0312 20:05:26.082438 3706 log.go:172] (0xc00098efd0) Reply frame received for 1\nI0312 20:05:26.082465 3706 log.go:172] (0xc00098efd0) (0xc000618640) Create stream\nI0312 20:05:26.082472 3706 log.go:172] (0xc00098efd0) (0xc000618640) Stream added, broadcasting: 3\nI0312 20:05:26.083136 3706 log.go:172] (0xc00098efd0) Reply frame received for 3\nI0312 20:05:26.083161 3706 log.go:172] (0xc00098efd0) (0xc00077b400) Create stream\nI0312 20:05:26.083173 3706 log.go:172] (0xc00098efd0) (0xc00077b400) Stream added, broadcasting: 5\nI0312 20:05:26.083858 3706 log.go:172] (0xc00098efd0) Reply frame received for 5\nI0312 20:05:26.145611 3706 log.go:172] (0xc00098efd0) Data frame received for 5\nI0312 20:05:26.145647 3706 log.go:172] (0xc00098efd0) Data frame received for 3\nI0312 20:05:26.145673 3706 log.go:172] (0xc000618640) (3) Data frame handling\nI0312 20:05:26.145686 3706 log.go:172] (0xc000618640) (3) Data frame sent\nI0312 20:05:26.145697 3706 log.go:172] (0xc00098efd0) Data frame received for 3\nI0312 20:05:26.145712 3706 log.go:172] (0xc000618640) (3) Data frame handling\nI0312 20:05:26.145748 3706 log.go:172] (0xc00077b400) (5) Data frame handling\nI0312 20:05:26.145790 3706 log.go:172] (0xc00077b400) (5) Data frame sent\nI0312 20:05:26.145801 3706 log.go:172] (0xc00098efd0) Data frame received for 5\nI0312 20:05:26.145806 3706 log.go:172] (0xc00077b400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 20:05:26.146763 3706 log.go:172] (0xc00098efd0) Data frame received for 1\nI0312 20:05:26.146781 3706 log.go:172] (0xc0009c05a0) (1) Data frame handling\nI0312 20:05:26.146790 3706 log.go:172] (0xc0009c05a0) (1) Data frame sent\nI0312 20:05:26.146803 3706 log.go:172] (0xc00098efd0) (0xc0009c05a0) Stream removed, broadcasting: 1\nI0312 20:05:26.146835 3706 log.go:172] (0xc00098efd0) Go away received\nI0312 20:05:26.147135 3706 log.go:172] (0xc00098efd0) (0xc0009c05a0) Stream removed, broadcasting: 1\nI0312 20:05:26.147153 3706 log.go:172] (0xc00098efd0) (0xc000618640) Stream removed, broadcasting: 3\nI0312 20:05:26.147164 3706 log.go:172] (0xc00098efd0) (0xc00077b400) Stream removed, broadcasting: 5\n" Mar 12 20:05:26.150: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 20:05:26.150: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 20:05:26.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2586 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 20:05:26.325: INFO: stderr: "I0312 20:05:26.266081 3727 log.go:172] (0xc000971550) (0xc000a565a0) Create stream\nI0312 20:05:26.266150 3727 log.go:172] (0xc000971550) (0xc000a565a0) Stream added, broadcasting: 1\nI0312 20:05:26.270002 3727 log.go:172] (0xc000971550) Reply frame received for 1\nI0312 20:05:26.270036 3727 log.go:172] (0xc000971550) (0xc0005da780) Create stream\nI0312 20:05:26.270045 3727 log.go:172] (0xc000971550) (0xc0005da780) Stream added, broadcasting: 3\nI0312 20:05:26.270774 3727 log.go:172] 
(0xc000971550) Reply frame received for 3\nI0312 20:05:26.270803 3727 log.go:172] (0xc000971550) (0xc00077d540) Create stream\nI0312 20:05:26.270816 3727 log.go:172] (0xc000971550) (0xc00077d540) Stream added, broadcasting: 5\nI0312 20:05:26.271463 3727 log.go:172] (0xc000971550) Reply frame received for 5\nI0312 20:05:26.319960 3727 log.go:172] (0xc000971550) Data frame received for 3\nI0312 20:05:26.319983 3727 log.go:172] (0xc0005da780) (3) Data frame handling\nI0312 20:05:26.319992 3727 log.go:172] (0xc0005da780) (3) Data frame sent\nI0312 20:05:26.319998 3727 log.go:172] (0xc000971550) Data frame received for 3\nI0312 20:05:26.320003 3727 log.go:172] (0xc0005da780) (3) Data frame handling\nI0312 20:05:26.320014 3727 log.go:172] (0xc000971550) Data frame received for 5\nI0312 20:05:26.320019 3727 log.go:172] (0xc00077d540) (5) Data frame handling\nI0312 20:05:26.320026 3727 log.go:172] (0xc00077d540) (5) Data frame sent\nI0312 20:05:26.320031 3727 log.go:172] (0xc000971550) Data frame received for 5\nI0312 20:05:26.320037 3727 log.go:172] (0xc00077d540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 20:05:26.321259 3727 log.go:172] (0xc000971550) Data frame received for 1\nI0312 20:05:26.321288 3727 log.go:172] (0xc000a565a0) (1) Data frame handling\nI0312 20:05:26.321310 3727 log.go:172] (0xc000a565a0) (1) Data frame sent\nI0312 20:05:26.321326 3727 log.go:172] (0xc000971550) (0xc000a565a0) Stream removed, broadcasting: 1\nI0312 20:05:26.321399 3727 log.go:172] (0xc000971550) Go away received\nI0312 20:05:26.321612 3727 log.go:172] (0xc000971550) (0xc000a565a0) Stream removed, broadcasting: 1\nI0312 20:05:26.321629 3727 log.go:172] (0xc000971550) (0xc0005da780) Stream removed, broadcasting: 3\nI0312 20:05:26.321640 3727 log.go:172] (0xc000971550) (0xc00077d540) Stream removed, broadcasting: 5\n" Mar 12 20:05:26.325: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 20:05:26.325: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 20:05:26.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2586 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 20:05:26.472: INFO: stderr: "I0312 20:05:26.408669 3749 log.go:172] (0xc0009a9550) (0xc00097e780) Create stream\nI0312 20:05:26.408920 3749 log.go:172] (0xc0009a9550) (0xc00097e780) Stream added, broadcasting: 1\nI0312 20:05:26.411989 3749 log.go:172] (0xc0009a9550) Reply frame received for 1\nI0312 20:05:26.412011 3749 log.go:172] (0xc0009a9550) (0xc0006425a0) Create stream\nI0312 20:05:26.412016 3749 log.go:172] (0xc0009a9550) (0xc0006425a0) Stream added, broadcasting: 3\nI0312 20:05:26.412506 3749 log.go:172] (0xc0009a9550) Reply frame received for 3\nI0312 20:05:26.412528 3749 log.go:172] (0xc0009a9550) (0xc0004c5360) Create stream\nI0312 20:05:26.412536 3749 log.go:172] (0xc0009a9550) (0xc0004c5360) Stream added, broadcasting: 5\nI0312 20:05:26.413000 3749 log.go:172] (0xc0009a9550) Reply frame received for 5\nI0312 20:05:26.467601 3749 log.go:172] (0xc0009a9550) Data frame received for 3\nI0312 20:05:26.467618 3749 log.go:172] (0xc0006425a0) (3) Data frame handling\nI0312 20:05:26.467624 3749 log.go:172] (0xc0006425a0) (3) Data frame sent\nI0312 20:05:26.467628 3749 log.go:172] (0xc0009a9550) Data frame received for 
3\nI0312 20:05:26.467632 3749 log.go:172] (0xc0006425a0) (3) Data frame handling\nI0312 20:05:26.467640 3749 log.go:172] (0xc0009a9550) Data frame received for 5\nI0312 20:05:26.467651 3749 log.go:172] (0xc0004c5360) (5) Data frame handling\nI0312 20:05:26.467657 3749 log.go:172] (0xc0004c5360) (5) Data frame sent\nI0312 20:05:26.467661 3749 log.go:172] (0xc0009a9550) Data frame received for 5\nI0312 20:05:26.467665 3749 log.go:172] (0xc0004c5360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 20:05:26.468647 3749 log.go:172] (0xc0009a9550) Data frame received for 1\nI0312 20:05:26.468659 3749 log.go:172] (0xc00097e780) (1) Data frame handling\nI0312 20:05:26.468667 3749 log.go:172] (0xc00097e780) (1) Data frame sent\nI0312 20:05:26.468676 3749 log.go:172] (0xc0009a9550) (0xc00097e780) Stream removed, broadcasting: 1\nI0312 20:05:26.468734 3749 log.go:172] (0xc0009a9550) Go away received\nI0312 20:05:26.468892 3749 log.go:172] (0xc0009a9550) (0xc00097e780) Stream removed, broadcasting: 1\nI0312 20:05:26.468905 3749 log.go:172] (0xc0009a9550) (0xc0006425a0) Stream removed, broadcasting: 3\nI0312 20:05:26.468910 3749 log.go:172] (0xc0009a9550) (0xc0004c5360) Stream removed, broadcasting: 5\n" Mar 12 20:05:26.472: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 20:05:26.472: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 20:05:26.474: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 12 20:05:36.478: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 20:05:36.478: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 20:05:36.478: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 12 20:05:36.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2586 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 20:05:36.674: INFO: stderr: "I0312 20:05:36.601444 3768 log.go:172] (0xc000bf2f20) (0xc000b3c3c0) Create stream\nI0312 20:05:36.601480 3768 log.go:172] (0xc000bf2f20) (0xc000b3c3c0) Stream added, broadcasting: 1\nI0312 20:05:36.603139 3768 log.go:172] (0xc000bf2f20) Reply frame received for 1\nI0312 20:05:36.603174 3768 log.go:172] (0xc000bf2f20) (0xc000be8140) Create stream\nI0312 20:05:36.603184 3768 log.go:172] (0xc000bf2f20) (0xc000be8140) Stream added, broadcasting: 3\nI0312 20:05:36.603837 3768 log.go:172] (0xc000bf2f20) Reply frame received for 3\nI0312 20:05:36.603864 3768 log.go:172] (0xc000bf2f20) (0xc000a8a140) Create stream\nI0312 20:05:36.603875 3768 log.go:172] (0xc000bf2f20) (0xc000a8a140) Stream added, broadcasting: 5\nI0312 20:05:36.604463 3768 log.go:172] (0xc000bf2f20) Reply frame received for 5\nI0312 20:05:36.669592 3768 log.go:172] (0xc000bf2f20) Data frame received for 3\nI0312 20:05:36.669628 3768 log.go:172] (0xc000be8140) (3) Data frame handling\nI0312 20:05:36.669639 3768 log.go:172] (0xc000be8140) (3) Data frame sent\nI0312 20:05:36.669649 3768 log.go:172] (0xc000bf2f20) Data frame received for 3\nI0312 20:05:36.669656 3768 log.go:172] (0xc000be8140) (3) Data frame handling\nI0312 20:05:36.669680 3768 log.go:172] 
(0xc000bf2f20) Data frame received for 5\nI0312 20:05:36.669691 3768 log.go:172] (0xc000a8a140) (5) Data frame handling\nI0312 20:05:36.669705 3768 log.go:172] (0xc000a8a140) (5) Data frame sent\nI0312 20:05:36.669717 3768 log.go:172] (0xc000bf2f20) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 20:05:36.669724 3768 log.go:172] (0xc000a8a140) (5) Data frame handling\nI0312 20:05:36.670602 3768 log.go:172] (0xc000bf2f20) Data frame received for 1\nI0312 20:05:36.670620 3768 log.go:172] (0xc000b3c3c0) (1) Data frame handling\nI0312 20:05:36.670631 3768 log.go:172] (0xc000b3c3c0) (1) Data frame sent\nI0312 20:05:36.670645 3768 log.go:172] (0xc000bf2f20) (0xc000b3c3c0) Stream removed, broadcasting: 1\nI0312 20:05:36.670662 3768 log.go:172] (0xc000bf2f20) Go away received\nI0312 20:05:36.670954 3768 log.go:172] (0xc000bf2f20) (0xc000b3c3c0) Stream removed, broadcasting: 1\nI0312 20:05:36.670971 3768 log.go:172] (0xc000bf2f20) (0xc000be8140) Stream removed, broadcasting: 3\nI0312 20:05:36.670977 3768 log.go:172] (0xc000bf2f20) (0xc000a8a140) Stream removed, broadcasting: 5\n" Mar 12 20:05:36.674: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 20:05:36.674: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 20:05:36.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2586 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 20:05:36.882: INFO: stderr: "I0312 20:05:36.795858 3788 log.go:172] (0xc0000f4580) (0xc000908000) Create stream\nI0312 20:05:36.795911 3788 log.go:172] (0xc0000f4580) (0xc000908000) Stream added, broadcasting: 1\nI0312 20:05:36.798355 3788 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0312 20:05:36.798390 3788 log.go:172] (0xc0000f4580) (0xc0009080a0) Create stream\nI0312 20:05:36.798396 3788 log.go:172] (0xc0000f4580) (0xc0009080a0) Stream added, broadcasting: 3\nI0312 20:05:36.799110 3788 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0312 20:05:36.799140 3788 log.go:172] (0xc0000f4580) (0xc0007434a0) Create stream\nI0312 20:05:36.799146 3788 log.go:172] (0xc0000f4580) (0xc0007434a0) Stream added, broadcasting: 5\nI0312 20:05:36.800012 3788 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0312 20:05:36.856664 3788 log.go:172] (0xc0000f4580) Data frame received for 5\nI0312 20:05:36.856689 3788 log.go:172] (0xc0007434a0) (5) Data frame handling\nI0312 20:05:36.856704 3788 log.go:172] (0xc0007434a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 20:05:36.876840 3788 log.go:172] (0xc0000f4580) Data frame received for 3\nI0312 20:05:36.876877 3788 log.go:172] (0xc0009080a0) (3) Data frame handling\nI0312 20:05:36.876913 3788 log.go:172] (0xc0009080a0) (3) Data frame sent\nI0312 20:05:36.877036 3788 log.go:172] (0xc0000f4580) Data frame received for 3\nI0312 20:05:36.877070 3788 log.go:172] (0xc0009080a0) (3) Data frame handling\nI0312 20:05:36.877100 3788 log.go:172] (0xc0000f4580) Data frame received for 5\nI0312 20:05:36.877114 3788 log.go:172] (0xc0007434a0) (5) Data frame handling\nI0312 20:05:36.878933 3788 log.go:172] (0xc0000f4580) Data frame received for 1\nI0312 20:05:36.878957 3788 log.go:172] (0xc000908000) (1) Data frame handling\nI0312 20:05:36.878977 3788 log.go:172] (0xc000908000) (1) Data frame sent\nI0312 20:05:36.879013 3788 log.go:172] 
(0xc0000f4580) (0xc000908000) Stream removed, broadcasting: 1\nI0312 20:05:36.879033 3788 log.go:172] (0xc0000f4580) Go away received\nI0312 20:05:36.879290 3788 log.go:172] (0xc0000f4580) (0xc000908000) Stream removed, broadcasting: 1\nI0312 20:05:36.879309 3788 log.go:172] (0xc0000f4580) (0xc0009080a0) Stream removed, broadcasting: 3\nI0312 20:05:36.879316 3788 log.go:172] (0xc0000f4580) (0xc0007434a0) Stream removed, broadcasting: 5\n" Mar 12 20:05:36.882: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 20:05:36.882: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 20:05:36.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2586 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 20:05:37.073: INFO: stderr: "I0312 20:05:36.984236 3807 log.go:172] (0xc000bd6370) (0xc000663b80) Create stream\nI0312 20:05:36.984270 3807 log.go:172] (0xc000bd6370) (0xc000663b80) Stream added, broadcasting: 1\nI0312 20:05:36.986200 3807 log.go:172] (0xc000bd6370) Reply frame received for 1\nI0312 20:05:36.986250 3807 log.go:172] (0xc000bd6370) (0xc000663c20) Create stream\nI0312 20:05:36.986258 3807 log.go:172] (0xc000bd6370) (0xc000663c20) Stream added, broadcasting: 3\nI0312 20:05:36.986896 3807 log.go:172] (0xc000bd6370) Reply frame received for 3\nI0312 20:05:36.986920 3807 log.go:172] (0xc000bd6370) (0xc000663cc0) Create stream\nI0312 20:05:36.986926 3807 log.go:172] (0xc000bd6370) (0xc000663cc0) Stream added, broadcasting: 5\nI0312 20:05:36.987460 3807 log.go:172] (0xc000bd6370) Reply frame received for 5\nI0312 20:05:37.040407 3807 log.go:172] (0xc000bd6370) Data frame received for 5\nI0312 20:05:37.040431 3807 log.go:172] (0xc000663cc0) (5) Data frame handling\nI0312 20:05:37.040448 3807 log.go:172] (0xc000663cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 20:05:37.067375 3807 log.go:172] (0xc000bd6370) Data frame received for 3\nI0312 20:05:37.067461 3807 log.go:172] (0xc000663c20) (3) Data frame handling\nI0312 20:05:37.067483 3807 log.go:172] (0xc000663c20) (3) Data frame sent\nI0312 20:05:37.067559 3807 log.go:172] (0xc000bd6370) Data frame received for 3\nI0312 20:05:37.067588 3807 log.go:172] (0xc000663c20) (3) Data frame handling\nI0312 20:05:37.067638 3807 log.go:172] (0xc000bd6370) Data frame received for 5\nI0312 20:05:37.067659 3807 log.go:172] (0xc000663cc0) (5) Data frame handling\nI0312 20:05:37.069191 3807 log.go:172] (0xc000bd6370) Data frame received for 1\nI0312 20:05:37.069214 3807 log.go:172] (0xc000663b80) (1) Data frame handling\nI0312 20:05:37.069228 3807 log.go:172] (0xc000663b80) (1) Data frame sent\nI0312 20:05:37.069242 3807 log.go:172] (0xc000bd6370) (0xc000663b80) Stream removed, broadcasting: 1\nI0312 20:05:37.069299 3807 log.go:172] (0xc000bd6370) Go away received\nI0312 20:05:37.069593 3807 log.go:172] (0xc000bd6370) (0xc000663b80) Stream removed, broadcasting: 1\nI0312 20:05:37.069609 3807 log.go:172] (0xc000bd6370) (0xc000663c20) Stream removed, broadcasting: 3\nI0312 20:05:37.069617 3807 log.go:172] (0xc000bd6370) (0xc000663cc0) Stream removed, broadcasting: 5\n" Mar 12 20:05:37.073: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 20:05:37.073: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' Mar 12 20:05:37.073: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 20:05:37.076: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 12 20:05:47.089: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 20:05:47.089: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 12 20:05:47.089: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 12 20:05:47.124: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:47.124: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:47.124: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:47.124: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:47.125: INFO: Mar 12 20:05:47.125: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:48.129: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:48.129: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:48.129: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:48.129: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:48.129: INFO: Mar 12 20:05:48.129: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:49.133: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:49.134: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:49.134: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:49.134: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:49.134: INFO: Mar 12 20:05:49.134: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:50.137: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:50.137: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:50.137: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:50.137: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC 
}] Mar 12 20:05:50.137: INFO: Mar 12 20:05:50.137: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:51.142: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:51.142: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:51.142: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:51.142: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:51.142: INFO: Mar 12 20:05:51.142: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:52.150: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:52.150: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:52.150: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:52.150: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:52.150: INFO: Mar 12 20:05:52.150: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:53.153: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:53.153: INFO: ss-0 jerma-worker2 
Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:53.153: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:53.153: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:53.153: INFO: Mar 12 20:05:53.153: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:54.157: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:54.157: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:54.158: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:54.158: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:54.158: INFO: Mar 12 20:05:54.158: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:55.162: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 20:05:55.162: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:04:55 +0000 UTC }] Mar 12 20:05:55.162: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:55.162: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 20:05:15 +0000 UTC }] Mar 12 20:05:55.162: INFO: Mar 12 20:05:55.162: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 20:05:56.165: INFO: Verifying statefulset ss doesn't scale past 0 for another 935.8528ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2586 Mar 12 20:05:57.168: INFO: Scaling statefulset ss to 0 Mar 12 20:05:57.176: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 20:05:57.178: INFO: Deleting all statefulset in ns statefulset-2586 Mar 12 20:05:57.181: INFO: Scaling statefulset ss to 0 Mar 12 20:05:57.188: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 20:05:57.190: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:05:57.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2586" for this suite.
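Two things in the run above deserve a gloss. The mv of index.html in and out of /usr/local/apache2/htdocs is how the suite toggles readiness: the webserver pods' readiness check fetches that file, so hiding it drives Ready=false without killing the container (the probe details are inferred from the httpd image and the Ready transitions in the log). And "burst scaling" means the StatefulSet was created with podManagementPolicy: Parallel, so scale-up to 3 and scale-down to 0 proceed for all ordinals at once even while pods are unready, instead of the default one-ordinal-at-a-time ordering. A placeholder sketch of the same scaling:

# Assumes a StatefulSet "ss" whose spec sets podManagementPolicy: Parallel.
kubectl -n statefulset-2586 scale statefulset ss --replicas=3
kubectl -n statefulset-2586 get pods -w    # ss-0..ss-2 are created together,
                                           # not gated on the previous ordinal's readiness
kubectl -n statefulset-2586 scale statefulset ss --replicas=0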
• [SLOW TEST:61.711 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":263,"skipped":4327,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:05:57.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-14319c39-ecd2-4e06-8f10-f5889b93b9f7 STEP: Creating a pod to test consume configMaps Mar 12 20:05:57.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31" in namespace "configmap-6326" to be "success or failure" Mar 12 20:05:57.321: INFO: Pod "pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31": Phase="Pending", Reason="", readiness=false. Elapsed: 9.986343ms Mar 12 20:05:59.325: INFO: Pod "pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01376315s Mar 12 20:06:01.328: INFO: Pod "pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016948523s STEP: Saw pod success Mar 12 20:06:01.328: INFO: Pod "pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31" satisfied condition "success or failure" Mar 12 20:06:01.330: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31 container configmap-volume-test: STEP: delete the pod Mar 12 20:06:01.368: INFO: Waiting for pod pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31 to disappear Mar 12 20:06:01.376: INFO: Pod pod-configmaps-84c4db96-0cc6-4bab-94d6-bcc28da98b31 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:01.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6326" for this suite. 
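The "with mappings" part of the ConfigMap test above refers to the items field of a configMap volume, which remaps a key to an arbitrary relative path inside the mount. A minimal standalone reproduction, with placeholder names (the suite's actual key/path pairs are generated):

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-demo
      items:
      - key: data-1            # key in the ConfigMap...
        path: path/to/data-1   # ...mapped to a nested path in the volume
EOF
kubectl logs pod/cm-demo       # expect: value-1 (once the pod has completed)
kubectl delete pod/cm-demo configmap/cm-demo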
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4334,"failed":0} SSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:01.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:01.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2300" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":265,"skipped":4337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:01.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 20:06:01.501: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.723557ms)
Mar 12 20:06:01.504: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.876346ms)
Mar 12 20:06:01.507: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.235039ms)
Mar 12 20:06:01.510: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.851324ms)
Mar 12 20:06:01.513: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.730537ms)
Mar 12 20:06:01.516: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.320131ms)
Mar 12 20:06:01.519: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.018842ms)
Mar 12 20:06:01.522: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.10926ms)
Mar 12 20:06:01.525: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.645465ms)
Mar 12 20:06:01.528: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.602153ms)
Mar 12 20:06:01.530: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.832322ms)
Mar 12 20:06:01.533: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.518777ms)
Mar 12 20:06:01.536: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.565322ms)
Mar 12 20:06:01.551: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 15.279756ms)
Mar 12 20:06:01.554: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.773199ms)
Mar 12 20:06:01.556: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.743564ms)
Mar 12 20:06:01.559: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.234641ms)
Mar 12 20:06:01.561: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.266161ms)
Mar 12 20:06:01.567: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 5.515927ms)
Mar 12 20:06:01.569: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 2.409072ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:01.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9439" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":266,"skipped":4377,"failed":0} SSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:01.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 20:06:01.629: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e43ee00f-20f2-4659-bcaa-8a5a4603309b" in namespace "security-context-test-9177" to be "success or failure" Mar 12 20:06:01.633: INFO: Pod "alpine-nnp-false-e43ee00f-20f2-4659-bcaa-8a5a4603309b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.30594ms Mar 12 20:06:03.636: INFO: Pod "alpine-nnp-false-e43ee00f-20f2-4659-bcaa-8a5a4603309b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006271321s Mar 12 20:06:03.636: INFO: Pod "alpine-nnp-false-e43ee00f-20f2-4659-bcaa-8a5a4603309b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:03.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9177" for this suite. 
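The Security Context spec above verifies that a container running with allowPrivilegeEscalation: false cannot gain privileges (the kubelet translates the field into the kernel's no_new_privs bit). A minimal sketch of a pod exercising the same field, using an illustrative name and an assumed public alpine image rather than the test's generated pod:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: nnp-false-demo              # illustrative name, not the test's generated one
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: alpine:3.11              # assumed image; the e2e test uses its own
        command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
        securityContext:
          allowPrivilegeEscalation: false
    EOF

On a Linux node the pod's log should then report NoNewPrivs: 1, which is the property the spec asserts.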
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4380,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:03.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 20:06:03.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df5b4468-d743-434b-b969-f559ed171d27" in namespace "projected-1307" to be "success or failure" Mar 12 20:06:03.722: INFO: Pod "downwardapi-volume-df5b4468-d743-434b-b969-f559ed171d27": Phase="Pending", Reason="", readiness=false. Elapsed: 21.801523ms Mar 12 20:06:05.724: INFO: Pod "downwardapi-volume-df5b4468-d743-434b-b969-f559ed171d27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02455039s STEP: Saw pod success Mar 12 20:06:05.724: INFO: Pod "downwardapi-volume-df5b4468-d743-434b-b969-f559ed171d27" satisfied condition "success or failure" Mar 12 20:06:05.726: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-df5b4468-d743-434b-b969-f559ed171d27 container client-container: STEP: delete the pod Mar 12 20:06:05.741: INFO: Waiting for pod downwardapi-volume-df5b4468-d743-434b-b969-f559ed171d27 to disappear Mar 12 20:06:05.762: INFO: Pod downwardapi-volume-df5b4468-d743-434b-b969-f559ed171d27 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:05.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1307" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:05.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 20:06:06.297: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 20:06:08.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640366, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640366, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640366, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640366, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 20:06:11.366: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:23.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8253" for this suite. 
STEP: Destroying namespace "webhook-8253-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.862 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":269,"skipped":4421,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:23.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 12 20:06:25.715: INFO: Pod pod-hostip-e4cb994f-da89-4fcd-aa6a-351da422c589 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:25.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2076" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4440,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:25.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-af51cc57-d00d-4bee-bffc-390a440727d1 STEP: Creating a pod to test consume configMaps Mar 12 20:06:25.821: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5b7a9b2-f612-4c88-a557-17acd959b0e7" in namespace "configmap-6824" to be "success or failure" Mar 12 20:06:25.838: INFO: Pod "pod-configmaps-f5b7a9b2-f612-4c88-a557-17acd959b0e7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.385784ms Mar 12 20:06:27.842: INFO: Pod "pod-configmaps-f5b7a9b2-f612-4c88-a557-17acd959b0e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020440192s STEP: Saw pod success Mar 12 20:06:27.842: INFO: Pod "pod-configmaps-f5b7a9b2-f612-4c88-a557-17acd959b0e7" satisfied condition "success or failure" Mar 12 20:06:27.845: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f5b7a9b2-f612-4c88-a557-17acd959b0e7 container configmap-volume-test: STEP: delete the pod Mar 12 20:06:27.877: INFO: Waiting for pod pod-configmaps-f5b7a9b2-f612-4c88-a557-17acd959b0e7 to disappear Mar 12 20:06:27.880: INFO: Pod pod-configmaps-f5b7a9b2-f612-4c88-a557-17acd959b0e7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:27.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6824" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4448,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:27.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 12 20:06:27.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-473' Mar 12 20:06:28.274: INFO: stderr: "" Mar 12 20:06:28.274: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 20:06:29.299: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 20:06:29.299: INFO: Found 0 / 1 Mar 12 20:06:30.278: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 20:06:30.278: INFO: Found 0 / 1 Mar 12 20:06:31.278: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 20:06:31.278: INFO: Found 1 / 1 Mar 12 20:06:31.278: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 12 20:06:31.280: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 20:06:31.280: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 12 20:06:31.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-l2rs7 --namespace=kubectl-473 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 12 20:06:31.411: INFO: stderr: "" Mar 12 20:06:31.412: INFO: stdout: "pod/agnhost-master-l2rs7 patched\n" STEP: checking annotations Mar 12 20:06:31.497: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 20:06:31.497: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:31.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-473" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":272,"skipped":4469,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:31.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 12 20:06:31.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 12 20:06:31.749: INFO: stderr: "" Mar 12 20:06:31.749: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:31.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9235" for this suite. 
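The api-versions spec above only needs the served group/version list, so it can be reproduced directly with kubectl; the assertion is simply that the core "v1" entry appears in the stdout captured above:

    # Print every served group/version and check that core v1 is among them
    kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1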
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":273,"skipped":4482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:31.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-166 STEP: creating replication controller nodeport-test in namespace services-166 I0312 20:06:31.876345 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-166, replica count: 2 I0312 20:06:34.926755 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 20:06:34.926: INFO: Creating new exec pod Mar 12 20:06:37.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-166 execpod7plfh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 12 20:06:38.189: INFO: stderr: "I0312 20:06:38.077950 3886 log.go:172] (0xc000a23760) (0xc000a0c280) Create stream\nI0312 20:06:38.078011 3886 log.go:172] (0xc000a23760) (0xc000a0c280) Stream added, broadcasting: 1\nI0312 20:06:38.081135 3886 log.go:172] (0xc000a23760) Reply frame received for 1\nI0312 20:06:38.081170 3886 log.go:172] (0xc000a23760) (0xc0006985a0) Create stream\nI0312 20:06:38.081179 3886 log.go:172] (0xc000a23760) (0xc0006985a0) Stream added, broadcasting: 3\nI0312 20:06:38.081923 3886 log.go:172] (0xc000a23760) Reply frame received for 3\nI0312 20:06:38.081957 3886 log.go:172] (0xc000a23760) (0xc000567360) Create stream\nI0312 20:06:38.081970 3886 log.go:172] (0xc000a23760) (0xc000567360) Stream added, broadcasting: 5\nI0312 20:06:38.082629 3886 log.go:172] (0xc000a23760) Reply frame received for 5\nI0312 20:06:38.183104 3886 log.go:172] (0xc000a23760) Data frame received for 5\nI0312 20:06:38.183126 3886 log.go:172] (0xc000567360) (5) Data frame handling\nI0312 20:06:38.183140 3886 log.go:172] (0xc000567360) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0312 20:06:38.184482 3886 log.go:172] (0xc000a23760) Data frame received for 3\nI0312 20:06:38.184495 3886 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0312 20:06:38.184506 3886 log.go:172] (0xc000a23760) Data frame received for 5\nI0312 20:06:38.184510 3886 log.go:172] (0xc000567360) (5) Data frame handling\nI0312 20:06:38.184517 3886 log.go:172] (0xc000567360) (5) Data frame sent\nI0312 20:06:38.184524 3886 log.go:172] (0xc000a23760) Data frame received for 5\nI0312 20:06:38.184528 3886 log.go:172] (0xc000567360) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0312 20:06:38.185229 3886 log.go:172] 
(0xc000a23760) Data frame received for 1\nI0312 20:06:38.185241 3886 log.go:172] (0xc000a0c280) (1) Data frame handling\nI0312 20:06:38.185248 3886 log.go:172] (0xc000a0c280) (1) Data frame sent\nI0312 20:06:38.185255 3886 log.go:172] (0xc000a23760) (0xc000a0c280) Stream removed, broadcasting: 1\nI0312 20:06:38.185275 3886 log.go:172] (0xc000a23760) Go away received\nI0312 20:06:38.185437 3886 log.go:172] (0xc000a23760) (0xc000a0c280) Stream removed, broadcasting: 1\nI0312 20:06:38.185446 3886 log.go:172] (0xc000a23760) (0xc0006985a0) Stream removed, broadcasting: 3\nI0312 20:06:38.185453 3886 log.go:172] (0xc000a23760) (0xc000567360) Stream removed, broadcasting: 5\n" Mar 12 20:06:38.189: INFO: stdout: "" Mar 12 20:06:38.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-166 execpod7plfh -- /bin/sh -x -c nc -zv -t -w 2 10.97.148.146 80' Mar 12 20:06:38.344: INFO: stderr: "I0312 20:06:38.268295 3905 log.go:172] (0xc000c1cd10) (0xc000c06320) Create stream\nI0312 20:06:38.268323 3905 log.go:172] (0xc000c1cd10) (0xc000c06320) Stream added, broadcasting: 1\nI0312 20:06:38.269685 3905 log.go:172] (0xc000c1cd10) Reply frame received for 1\nI0312 20:06:38.269703 3905 log.go:172] (0xc000c1cd10) (0xc00062fd60) Create stream\nI0312 20:06:38.269708 3905 log.go:172] (0xc000c1cd10) (0xc00062fd60) Stream added, broadcasting: 3\nI0312 20:06:38.270185 3905 log.go:172] (0xc000c1cd10) Reply frame received for 3\nI0312 20:06:38.270214 3905 log.go:172] (0xc000c1cd10) (0xc00062fe00) Create stream\nI0312 20:06:38.270222 3905 log.go:172] (0xc000c1cd10) (0xc00062fe00) Stream added, broadcasting: 5\nI0312 20:06:38.270668 3905 log.go:172] (0xc000c1cd10) Reply frame received for 5\nI0312 20:06:38.340786 3905 log.go:172] (0xc000c1cd10) Data frame received for 3\nI0312 20:06:38.340813 3905 log.go:172] (0xc00062fd60) (3) Data frame handling\nI0312 20:06:38.340831 3905 log.go:172] (0xc000c1cd10) Data frame received for 5\nI0312 20:06:38.340836 3905 log.go:172] (0xc00062fe00) (5) Data frame handling\nI0312 20:06:38.340842 3905 log.go:172] (0xc00062fe00) (5) Data frame sent\nI0312 20:06:38.340848 3905 log.go:172] (0xc000c1cd10) Data frame received for 5\nI0312 20:06:38.340852 3905 log.go:172] (0xc00062fe00) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.148.146 80\nConnection to 10.97.148.146 80 port [tcp/http] succeeded!\nI0312 20:06:38.341660 3905 log.go:172] (0xc000c1cd10) Data frame received for 1\nI0312 20:06:38.341674 3905 log.go:172] (0xc000c06320) (1) Data frame handling\nI0312 20:06:38.341681 3905 log.go:172] (0xc000c06320) (1) Data frame sent\nI0312 20:06:38.341689 3905 log.go:172] (0xc000c1cd10) (0xc000c06320) Stream removed, broadcasting: 1\nI0312 20:06:38.341721 3905 log.go:172] (0xc000c1cd10) Go away received\nI0312 20:06:38.341939 3905 log.go:172] (0xc000c1cd10) (0xc000c06320) Stream removed, broadcasting: 1\nI0312 20:06:38.341950 3905 log.go:172] (0xc000c1cd10) (0xc00062fd60) Stream removed, broadcasting: 3\nI0312 20:06:38.341956 3905 log.go:172] (0xc000c1cd10) (0xc00062fe00) Stream removed, broadcasting: 5\n" Mar 12 20:06:38.344: INFO: stdout: "" Mar 12 20:06:38.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-166 execpod7plfh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 32305' Mar 12 20:06:38.529: INFO: stderr: "I0312 20:06:38.449042 3925 log.go:172] (0xc0009ca9a0) (0xc00098c000) Create stream\nI0312 20:06:38.449082 3925 log.go:172] (0xc0009ca9a0) (0xc00098c000) Stream added, broadcasting: 1\nI0312 
20:06:38.450857 3925 log.go:172] (0xc0009ca9a0) Reply frame received for 1\nI0312 20:06:38.450887 3925 log.go:172] (0xc0009ca9a0) (0xc000920000) Create stream\nI0312 20:06:38.450901 3925 log.go:172] (0xc0009ca9a0) (0xc000920000) Stream added, broadcasting: 3\nI0312 20:06:38.451703 3925 log.go:172] (0xc0009ca9a0) Reply frame received for 3\nI0312 20:06:38.451743 3925 log.go:172] (0xc0009ca9a0) (0xc0009200a0) Create stream\nI0312 20:06:38.451750 3925 log.go:172] (0xc0009ca9a0) (0xc0009200a0) Stream added, broadcasting: 5\nI0312 20:06:38.452444 3925 log.go:172] (0xc0009ca9a0) Reply frame received for 5\nI0312 20:06:38.524018 3925 log.go:172] (0xc0009ca9a0) Data frame received for 3\nI0312 20:06:38.524052 3925 log.go:172] (0xc000920000) (3) Data frame handling\nI0312 20:06:38.524076 3925 log.go:172] (0xc0009ca9a0) Data frame received for 5\nI0312 20:06:38.524086 3925 log.go:172] (0xc0009200a0) (5) Data frame handling\nI0312 20:06:38.524096 3925 log.go:172] (0xc0009200a0) (5) Data frame sent\nI0312 20:06:38.524111 3925 log.go:172] (0xc0009ca9a0) Data frame received for 5\nI0312 20:06:38.524122 3925 log.go:172] (0xc0009200a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.4 32305\nConnection to 172.17.0.4 32305 port [tcp/32305] succeeded!\nI0312 20:06:38.525159 3925 log.go:172] (0xc0009ca9a0) Data frame received for 1\nI0312 20:06:38.525187 3925 log.go:172] (0xc00098c000) (1) Data frame handling\nI0312 20:06:38.525200 3925 log.go:172] (0xc00098c000) (1) Data frame sent\nI0312 20:06:38.525211 3925 log.go:172] (0xc0009ca9a0) (0xc00098c000) Stream removed, broadcasting: 1\nI0312 20:06:38.525220 3925 log.go:172] (0xc0009ca9a0) Go away received\nI0312 20:06:38.525569 3925 log.go:172] (0xc0009ca9a0) (0xc00098c000) Stream removed, broadcasting: 1\nI0312 20:06:38.525582 3925 log.go:172] (0xc0009ca9a0) (0xc000920000) Stream removed, broadcasting: 3\nI0312 20:06:38.525588 3925 log.go:172] (0xc0009ca9a0) (0xc0009200a0) Stream removed, broadcasting: 5\n" Mar 12 20:06:38.529: INFO: stdout: "" Mar 12 20:06:38.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-166 execpod7plfh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 32305' Mar 12 20:06:38.731: INFO: stderr: "I0312 20:06:38.665217 3946 log.go:172] (0xc0001051e0) (0xc000984000) Create stream\nI0312 20:06:38.665257 3946 log.go:172] (0xc0001051e0) (0xc000984000) Stream added, broadcasting: 1\nI0312 20:06:38.666773 3946 log.go:172] (0xc0001051e0) Reply frame received for 1\nI0312 20:06:38.666795 3946 log.go:172] (0xc0001051e0) (0xc00091a000) Create stream\nI0312 20:06:38.666803 3946 log.go:172] (0xc0001051e0) (0xc00091a000) Stream added, broadcasting: 3\nI0312 20:06:38.667320 3946 log.go:172] (0xc0001051e0) Reply frame received for 3\nI0312 20:06:38.667339 3946 log.go:172] (0xc0001051e0) (0xc00091a0a0) Create stream\nI0312 20:06:38.667346 3946 log.go:172] (0xc0001051e0) (0xc00091a0a0) Stream added, broadcasting: 5\nI0312 20:06:38.667868 3946 log.go:172] (0xc0001051e0) Reply frame received for 5\nI0312 20:06:38.726952 3946 log.go:172] (0xc0001051e0) Data frame received for 5\nI0312 20:06:38.727023 3946 log.go:172] (0xc00091a0a0) (5) Data frame handling\nI0312 20:06:38.727043 3946 log.go:172] (0xc00091a0a0) (5) Data frame sent\nI0312 20:06:38.727053 3946 log.go:172] (0xc0001051e0) Data frame received for 5\nI0312 20:06:38.727061 3946 log.go:172] (0xc00091a0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 32305\nConnection to 172.17.0.5 32305 port [tcp/32305] succeeded!\nI0312 20:06:38.727084 3946 
log.go:172] (0xc0001051e0) Data frame received for 3\nI0312 20:06:38.727098 3946 log.go:172] (0xc00091a000) (3) Data frame handling\nI0312 20:06:38.728156 3946 log.go:172] (0xc0001051e0) Data frame received for 1\nI0312 20:06:38.728173 3946 log.go:172] (0xc000984000) (1) Data frame handling\nI0312 20:06:38.728183 3946 log.go:172] (0xc000984000) (1) Data frame sent\nI0312 20:06:38.728194 3946 log.go:172] (0xc0001051e0) (0xc000984000) Stream removed, broadcasting: 1\nI0312 20:06:38.728209 3946 log.go:172] (0xc0001051e0) Go away received\nI0312 20:06:38.728460 3946 log.go:172] (0xc0001051e0) (0xc000984000) Stream removed, broadcasting: 1\nI0312 20:06:38.728472 3946 log.go:172] (0xc0001051e0) (0xc00091a000) Stream removed, broadcasting: 3\nI0312 20:06:38.728477 3946 log.go:172] (0xc0001051e0) (0xc00091a0a0) Stream removed, broadcasting: 5\n" Mar 12 20:06:38.731: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:38.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-166" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.981 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":274,"skipped":4505,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:38.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 20:06:38.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e7be41c-9e0e-429d-8f62-3fa2546a73b7" in namespace "projected-8677" to be "success or failure" Mar 12 20:06:38.826: INFO: Pod "downwardapi-volume-8e7be41c-9e0e-429d-8f62-3fa2546a73b7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.947593ms Mar 12 20:06:40.830: INFO: Pod "downwardapi-volume-8e7be41c-9e0e-429d-8f62-3fa2546a73b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021712239s STEP: Saw pod success Mar 12 20:06:40.830: INFO: Pod "downwardapi-volume-8e7be41c-9e0e-429d-8f62-3fa2546a73b7" satisfied condition "success or failure" Mar 12 20:06:40.832: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8e7be41c-9e0e-429d-8f62-3fa2546a73b7 container client-container: STEP: delete the pod Mar 12 20:06:40.852: INFO: Waiting for pod downwardapi-volume-8e7be41c-9e0e-429d-8f62-3fa2546a73b7 to disappear Mar 12 20:06:40.856: INFO: Pod downwardapi-volume-8e7be41c-9e0e-429d-8f62-3fa2546a73b7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:40.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8677" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4515,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:40.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 12 20:06:40.912: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 12 20:06:41.840: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 12 20:06:44.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640401, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640401, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640401, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719640401, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 20:06:47.046: INFO: Waited 612.761771ms for the sample-apiserver to be ready to handle requests. 
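The aggregator spec above deploys the 1.10 sample apiserver and registers it behind the kube-aggregator. That registration is done with an APIService object along these lines; the group, version, and service name here are illustrative placeholders rather than values taken from the test fixture:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1alpha1.wardle.example.com   # must be <version>.<group>
    spec:
      group: wardle.example.com           # illustrative group
      version: v1alpha1
      groupPriorityMinimum: 2000
      versionPriority: 200
      insecureSkipTLSVerify: true         # sketch only; the test trusts the server's CA instead
      service:
        namespace: aggregator-8440        # namespace from the run above
        name: sample-api                  # illustrative service name
    EOF

Once the APIService reports Available, requests under /apis/<group>/<version> are proxied by the kube-apiserver to the registered backend.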
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:47.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8440" for this suite. • [SLOW TEST:6.771 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":276,"skipped":4520,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:47.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-4d54212a-9615-41cb-959e-3a5c494bb4da [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:06:47.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7932" for this suite. 
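The Secrets spec above is a pure validation check: a Secret whose data map contains an empty key must be rejected by the API server, so no pod is ever created. The failure can be reproduced directly (the name is illustrative and the value is just base64 for "foo"):

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-demo   # illustrative name
    data:
      "": Zm9v                     # empty key -> rejected by server-side validation
    EOF

The create is expected to fail validation rather than persist the object, which is exactly what the spec asserts.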
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":277,"skipped":4536,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 20:06:47.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8387 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-8387 Mar 12 20:06:47.784: INFO: Found 0 stateful pods, waiting for 1 Mar 12 20:06:57.789: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 20:06:57.839: INFO: Deleting all statefulset in ns statefulset-8387 Mar 12 20:06:57.846: INFO: Scaling statefulset ss to 0 Mar 12 20:07:17.933: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 20:07:17.936: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 20:07:17.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8387" for this suite. • [SLOW TEST:30.285 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":278,"skipped":4540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSMar 12 20:07:17.965: INFO: Running AfterSuite actions on all nodes Mar 12 20:07:17.965: INFO: Running AfterSuite actions on node 1 Mar 12 20:07:17.965: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0} Ran 278 of 4843 Specs in 3814.703 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped PASS