I0322 23:36:02.681100 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0322 23:36:02.681327 7 e2e.go:124] Starting e2e run "a489bb26-b82e-42d2-9897-d0a9e1f495cd" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584920161 - Will randomize all specs
Will run 275 of 4992 specs

Mar 22 23:36:02.734: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 23:36:02.737: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 22 23:36:02.761: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 22 23:36:02.796: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 22 23:36:02.796: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 22 23:36:02.796: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 22 23:36:02.805: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 22 23:36:02.805: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 22 23:36:02.805: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Mar 22 23:36:02.806: INFO: kube-apiserver version: v1.17.0
Mar 22 23:36:02.806: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 23:36:02.813: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:36:02.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Mar 22 23:36:02.892: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 22 23:36:02.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7" in namespace "projected-7158" to be "Succeeded or Failed"
Mar 22 23:36:02.905: INFO: Pod "downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.256751ms
Mar 22 23:36:04.934: INFO: Pod "downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03232234s
Mar 22 23:36:06.939: INFO: Pod "downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036702751s
STEP: Saw pod success
Mar 22 23:36:06.939: INFO: Pod "downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7" satisfied condition "Succeeded or Failed"
Mar 22 23:36:06.942: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7 container client-container: 
STEP: delete the pod
Mar 22 23:36:06.973: INFO: Waiting for pod downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7 to disappear
Mar 22 23:36:07.030: INFO: Pod downwardapi-volume-56a93232-4044-4187-974d-28d75524acb7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:36:07.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7158" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":46,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:36:07.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 22 23:36:07.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108" in namespace "downward-api-1181" to be "Succeeded or Failed"
Mar 22 23:36:07.159: INFO: Pod "downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143498ms
Mar 22 23:36:09.162: INFO: Pod "downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005155864s
Mar 22 23:36:11.166: INFO: Pod "downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009058391s
STEP: Saw pod success
Mar 22 23:36:11.166: INFO: Pod "downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108" satisfied condition "Succeeded or Failed"
Mar 22 23:36:11.169: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108 container client-container: 
STEP: delete the pod
Mar 22 23:36:11.200: INFO: Waiting for pod downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108 to disappear
Mar 22 23:36:11.212: INFO: Pod downwardapi-volume-8e727a31-7316-4135-b2f7-ef7387ba0108 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:36:11.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1181" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":68,"failed":0}
SSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:36:11.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:36:25.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1756" for this suite.
• [SLOW TEST:14.144 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":3,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:36:25.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Mar 22 23:36:25.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1319'
Mar 22 23:36:28.442: INFO: stderr: ""
Mar 22 23:36:28.442: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 22 23:36:28.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1319'
Mar 22 23:36:28.555: INFO: stderr: ""
Mar 22 23:36:28.555: INFO: stdout: "update-demo-nautilus-kwdl5 update-demo-nautilus-pwhqk "
Mar 22 23:36:28.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kwdl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1319'
Mar 22 23:36:28.649: INFO: stderr: ""
Mar 22 23:36:28.649: INFO: stdout: ""
Mar 22 23:36:28.649: INFO: update-demo-nautilus-kwdl5 is created but not running
Mar 22 23:36:33.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1319'
Mar 22 23:36:33.747: INFO: stderr: ""
Mar 22 23:36:33.747: INFO: stdout: "update-demo-nautilus-kwdl5 update-demo-nautilus-pwhqk "
Mar 22 23:36:33.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kwdl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1319'
Mar 22 23:36:33.838: INFO: stderr: ""
Mar 22 23:36:33.838: INFO: stdout: "true"
Mar 22 23:36:33.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kwdl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1319'
Mar 22 23:36:33.931: INFO: stderr: ""
Mar 22 23:36:33.931: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 22 23:36:33.931: INFO: validating pod update-demo-nautilus-kwdl5
Mar 22 23:36:33.935: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 22 23:36:33.935: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 22 23:36:33.936: INFO: update-demo-nautilus-kwdl5 is verified up and running
Mar 22 23:36:33.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pwhqk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1319'
Mar 22 23:36:34.030: INFO: stderr: ""
Mar 22 23:36:34.030: INFO: stdout: "true"
Mar 22 23:36:34.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pwhqk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1319'
Mar 22 23:36:34.133: INFO: stderr: ""
Mar 22 23:36:34.133: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 22 23:36:34.133: INFO: validating pod update-demo-nautilus-pwhqk
Mar 22 23:36:34.138: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 22 23:36:34.138: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 22 23:36:34.138: INFO: update-demo-nautilus-pwhqk is verified up and running
STEP: using delete to clean up resources
Mar 22 23:36:34.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1319'
Mar 22 23:36:34.239: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 22 23:36:34.239: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 22 23:36:34.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1319'
Mar 22 23:36:34.330: INFO: stderr: "No resources found in kubectl-1319 namespace.\n"
Mar 22 23:36:34.330: INFO: stdout: ""
Mar 22 23:36:34.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1319 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 22 23:36:34.408: INFO: stderr: ""
Mar 22 23:36:34.408: INFO: stdout: "update-demo-nautilus-kwdl5\nupdate-demo-nautilus-pwhqk\n"
Mar 22 23:36:34.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1319'
Mar 22 23:36:35.004: INFO: stderr: "No resources found in kubectl-1319 namespace.\n"
Mar 22 23:36:35.004: INFO: stdout: ""
Mar 22 23:36:35.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1319 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 22 23:36:35.098: INFO: stderr: ""
Mar 22 23:36:35.098: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:36:35.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1319" for this suite.
• [SLOW TEST:9.741 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":4,"skipped":126,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:36:35.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 22 23:36:35.720: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 22 23:36:37.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720516995, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720516995, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720516995, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720516995, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 22 23:36:40.766: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:36:41.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7068" for this suite.
STEP: Destroying namespace "webhook-7068-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.258 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":5,"skipped":146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:36:41.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Mar 22 23:36:41.438: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:36:46.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1759" for this suite.
• [SLOW TEST:5.421 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":6,"skipped":175,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:36:46.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4230
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 22 23:36:46.842: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 22 23:36:46.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 23:36:48.920: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 23:36:50.920: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 23:36:52.920: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 23:36:54.919: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 23:36:56.920: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 23:36:58.920: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 23:37:00.920: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 22 23:37:00.925: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 23:37:02.929: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 23:37:04.929: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 23:37:06.929: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 23:37:08.929: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 22 23:37:13.003: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.128:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4230 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 22 23:37:13.003: INFO: >>> kubeConfig: /root/.kube/config
I0322 23:37:13.042097 7 log.go:172] (0xc002d3e790) (0xc001268820) Create stream
I0322 23:37:13.042126 7 log.go:172] (0xc002d3e790) (0xc001268820) Stream added, broadcasting: 1
I0322 23:37:13.045635 7 log.go:172] (0xc002d3e790) Reply frame received for 1
I0322 23:37:13.045695 7 log.go:172] (0xc002d3e790) (0xc002a96640) Create stream
I0322 23:37:13.045721 7 log.go:172] (0xc002d3e790) (0xc002a96640) Stream added, broadcasting: 3
I0322 23:37:13.046955 7 log.go:172] (0xc002d3e790) Reply frame received for 3
I0322 23:37:13.047012 7 log.go:172] (0xc002d3e790) (0xc000fe1180) Create stream
I0322 23:37:13.047033 7 log.go:172] (0xc002d3e790) (0xc000fe1180) Stream added, broadcasting: 5
I0322 23:37:13.048128 7 log.go:172] (0xc002d3e790) Reply frame received for 5
I0322 23:37:13.135478 7 log.go:172] (0xc002d3e790) Data frame received for 3
I0322 23:37:13.135503 7 log.go:172] (0xc002a96640) (3) Data frame handling
I0322 23:37:13.135516 7 log.go:172] (0xc002a96640) (3) Data frame sent
I0322 23:37:13.136122 7 log.go:172] (0xc002d3e790) Data frame received for 5
I0322 23:37:13.136133 7 log.go:172] (0xc000fe1180) (5) Data frame handling
I0322 23:37:13.136154 7 log.go:172] (0xc002d3e790) Data frame received for 3
I0322 23:37:13.136168 7 log.go:172] (0xc002a96640) (3) Data frame handling
I0322 23:37:13.137935 7 log.go:172] (0xc002d3e790) Data frame received for 1
I0322 23:37:13.137952 7 log.go:172] (0xc001268820) (1) Data frame handling
I0322 23:37:13.137960 7 log.go:172] (0xc001268820) (1) Data frame sent
I0322 23:37:13.137969 7 log.go:172] (0xc002d3e790) (0xc001268820) Stream removed, broadcasting: 1
I0322 23:37:13.137998 7 log.go:172] (0xc002d3e790) Go away received
I0322 23:37:13.138293 7 log.go:172] (0xc002d3e790) (0xc001268820) Stream removed, broadcasting: 1
I0322 23:37:13.138308 7 log.go:172] (0xc002d3e790) (0xc002a96640) Stream removed, broadcasting: 3
I0322 23:37:13.138320 7 log.go:172] (0xc002d3e790) (0xc000fe1180) Stream removed, broadcasting: 5
Mar 22 23:37:13.138: INFO: Found all expected endpoints: [netserver-0]
Mar 22 23:37:13.141: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.254:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4230 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 22 23:37:13.141: INFO: >>> kubeConfig: /root/.kube/config
I0322 23:37:13.164195 7 log.go:172] (0xc002d3edc0) (0xc001268e60) Create stream
I0322 23:37:13.164222 7 log.go:172] (0xc002d3edc0) (0xc001268e60) Stream added, broadcasting: 1
I0322 23:37:13.166691 7 log.go:172] (0xc002d3edc0) Reply frame received for 1
I0322 23:37:13.166731 7 log.go:172] (0xc002d3edc0) (0xc001268fa0) Create stream
I0322 23:37:13.166751 7 log.go:172] (0xc002d3edc0) (0xc001268fa0) Stream added, broadcasting: 3
I0322 23:37:13.167472 7 log.go:172] (0xc002d3edc0) Reply frame received for 3
I0322 23:37:13.167505 7 log.go:172] (0xc002d3edc0) (0xc000fe1360) Create stream
I0322 23:37:13.167515 7 log.go:172] (0xc002d3edc0) (0xc000fe1360) Stream added, broadcasting: 5
I0322 23:37:13.168435 7 log.go:172] (0xc002d3edc0) Reply frame received for 5
I0322 23:37:13.224302 7 log.go:172] (0xc002d3edc0) Data frame received for 3
I0322 23:37:13.224367 7 log.go:172] (0xc001268fa0) (3) Data frame handling
I0322 23:37:13.224424 7 log.go:172] (0xc001268fa0) (3) Data frame sent
I0322 23:37:13.224455 7 log.go:172] (0xc002d3edc0) Data frame received for 3
I0322 23:37:13.224466 7 log.go:172] (0xc001268fa0) (3) Data frame handling
I0322 23:37:13.224609 7 log.go:172] (0xc002d3edc0) Data frame received for 5
I0322 23:37:13.224648 7 log.go:172] (0xc000fe1360) (5) Data frame handling
I0322 23:37:13.226263 7 log.go:172] (0xc002d3edc0) Data frame received for 1
I0322 23:37:13.226300 7 log.go:172] (0xc001268e60) (1) Data frame handling
I0322 23:37:13.226337 7 log.go:172] (0xc001268e60) (1) Data frame sent
I0322 23:37:13.226366 7 log.go:172] (0xc002d3edc0) (0xc001268e60) Stream removed, broadcasting: 1
I0322 23:37:13.226395 7 log.go:172] (0xc002d3edc0) Go away received
I0322 23:37:13.226479 7 log.go:172] (0xc002d3edc0) (0xc001268e60) Stream removed, broadcasting: 1
I0322 23:37:13.226518 7 log.go:172] (0xc002d3edc0) (0xc001268fa0) Stream removed, broadcasting: 3
I0322 23:37:13.226546 7 log.go:172] (0xc002d3edc0) (0xc000fe1360) Stream removed, broadcasting: 5
Mar 22 23:37:13.226: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:37:13.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4230" for this suite.
• [SLOW TEST:26.447 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":180,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:37:13.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 22 23:37:13.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7" in namespace "downward-api-7457" to be "Succeeded or Failed"
Mar 22 23:37:13.334: INFO: Pod "downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.483097ms
Mar 22 23:37:15.343: INFO: Pod "downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015758809s
Mar 22 23:37:17.347: INFO: Pod "downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019996653s
STEP: Saw pod success
Mar 22 23:37:17.347: INFO: Pod "downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7" satisfied condition "Succeeded or Failed"
Mar 22 23:37:17.351: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7 container client-container: 
STEP: delete the pod
Mar 22 23:37:17.398: INFO: Waiting for pod downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7 to disappear
Mar 22 23:37:17.411: INFO: Pod downwardapi-volume-6f0de254-7e3f-4d09-93f4-0c2b9730ddf7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:37:17.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7457" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":192,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:37:17.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1262/configmap-test-2ce485fb-53e1-4543-88d9-b63718649e8f
STEP: Creating a pod to test consume configMaps
Mar 22 23:37:17.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc" in namespace "configmap-1262" to be "Succeeded or Failed"
Mar 22 23:37:17.494: INFO: Pod "pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.964175ms
Mar 22 23:37:19.498: INFO: Pod "pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008109056s
Mar 22 23:37:21.503: INFO: Pod "pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.012385875s
Mar 22 23:37:23.507: INFO: Pod "pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016833367s
STEP: Saw pod success
Mar 22 23:37:23.507: INFO: Pod "pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc" satisfied condition "Succeeded or Failed"
Mar 22 23:37:23.510: INFO: Trying to get logs from node latest-worker pod pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc container env-test: 
STEP: delete the pod
Mar 22 23:37:23.534: INFO: Waiting for pod pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc to disappear
Mar 22 23:37:23.543: INFO: Pod pod-configmaps-91e85e6a-8c5a-42e3-ba49-ebbe4d44cdbc no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:37:23.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1262" for this suite.
• [SLOW TEST:6.131 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":235,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:37:23.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 23:37:24.455: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 23:37:26.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517044, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517044, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517044, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517044, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 23:37:29.511: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:37:29.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom 
resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:37:30.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-812" for this suite. STEP: Destroying namespace "webhook-812-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.311 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":10,"skipped":237,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:37:30.862: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 22 23:37:30.902: INFO: Waiting up to 5m0s for pod "pod-c2d10ac0-4ea2-4557-ab76-840174f77fae" in namespace "emptydir-9081" to be "Succeeded or Failed" Mar 22 23:37:30.914: INFO: Pod "pod-c2d10ac0-4ea2-4557-ab76-840174f77fae": Phase="Pending", Reason="", readiness=false. Elapsed: 12.18899ms Mar 22 23:37:32.918: INFO: Pod "pod-c2d10ac0-4ea2-4557-ab76-840174f77fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015861218s Mar 22 23:37:34.922: INFO: Pod "pod-c2d10ac0-4ea2-4557-ab76-840174f77fae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020309666s STEP: Saw pod success Mar 22 23:37:34.922: INFO: Pod "pod-c2d10ac0-4ea2-4557-ab76-840174f77fae" satisfied condition "Succeeded or Failed" Mar 22 23:37:34.926: INFO: Trying to get logs from node latest-worker pod pod-c2d10ac0-4ea2-4557-ab76-840174f77fae container test-container: STEP: delete the pod Mar 22 23:37:35.070: INFO: Waiting for pod pod-c2d10ac0-4ea2-4557-ab76-840174f77fae to disappear Mar 22 23:37:35.082: INFO: Pod pod-c2d10ac0-4ea2-4557-ab76-840174f77fae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:37:35.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9081" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":243,"failed":0} SSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:37:35.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 22 23:37:45.180: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.180: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.219331 7 log.go:172] (0xc002467a20) (0xc001227d60) Create stream I0322 23:37:45.219361 7 log.go:172] (0xc002467a20) (0xc001227d60) Stream added, broadcasting: 1 I0322 23:37:45.221937 7 log.go:172] (0xc002467a20) Reply frame received for 1 I0322 23:37:45.221984 7 log.go:172] (0xc002467a20) (0xc00111e000) Create stream I0322 23:37:45.221999 7 log.go:172] (0xc002467a20) (0xc00111e000) Stream added, broadcasting: 3 I0322 23:37:45.223084 7 log.go:172] (0xc002467a20) Reply frame 
received for 3 I0322 23:37:45.223109 7 log.go:172] (0xc002467a20) (0xc0012b4780) Create stream I0322 23:37:45.223118 7 log.go:172] (0xc002467a20) (0xc0012b4780) Stream added, broadcasting: 5 I0322 23:37:45.224064 7 log.go:172] (0xc002467a20) Reply frame received for 5 I0322 23:37:45.288899 7 log.go:172] (0xc002467a20) Data frame received for 5 I0322 23:37:45.288929 7 log.go:172] (0xc0012b4780) (5) Data frame handling I0322 23:37:45.288954 7 log.go:172] (0xc002467a20) Data frame received for 3 I0322 23:37:45.288987 7 log.go:172] (0xc00111e000) (3) Data frame handling I0322 23:37:45.289015 7 log.go:172] (0xc00111e000) (3) Data frame sent I0322 23:37:45.289029 7 log.go:172] (0xc002467a20) Data frame received for 3 I0322 23:37:45.289045 7 log.go:172] (0xc00111e000) (3) Data frame handling I0322 23:37:45.290720 7 log.go:172] (0xc002467a20) Data frame received for 1 I0322 23:37:45.290759 7 log.go:172] (0xc001227d60) (1) Data frame handling I0322 23:37:45.290793 7 log.go:172] (0xc001227d60) (1) Data frame sent I0322 23:37:45.290823 7 log.go:172] (0xc002467a20) (0xc001227d60) Stream removed, broadcasting: 1 I0322 23:37:45.290903 7 log.go:172] (0xc002467a20) (0xc001227d60) Stream removed, broadcasting: 1 I0322 23:37:45.290938 7 log.go:172] (0xc002467a20) (0xc00111e000) Stream removed, broadcasting: 3 I0322 23:37:45.290960 7 log.go:172] (0xc002467a20) (0xc0012b4780) Stream removed, broadcasting: 5 Mar 22 23:37:45.290: INFO: Exec stderr: "" I0322 23:37:45.291016 7 log.go:172] (0xc002467a20) Go away received Mar 22 23:37:45.291: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.291: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.322576 7 log.go:172] (0xc002517ce0) (0xc0012b4fa0) Create stream I0322 23:37:45.322601 7 log.go:172] (0xc002517ce0) (0xc0012b4fa0) Stream added, broadcasting: 1 I0322 
23:37:45.324439 7 log.go:172] (0xc002517ce0) Reply frame received for 1 I0322 23:37:45.324472 7 log.go:172] (0xc002517ce0) (0xc00111e0a0) Create stream I0322 23:37:45.324484 7 log.go:172] (0xc002517ce0) (0xc00111e0a0) Stream added, broadcasting: 3 I0322 23:37:45.325569 7 log.go:172] (0xc002517ce0) Reply frame received for 3 I0322 23:37:45.325595 7 log.go:172] (0xc002517ce0) (0xc00117ba40) Create stream I0322 23:37:45.325605 7 log.go:172] (0xc002517ce0) (0xc00117ba40) Stream added, broadcasting: 5 I0322 23:37:45.326304 7 log.go:172] (0xc002517ce0) Reply frame received for 5 I0322 23:37:45.383255 7 log.go:172] (0xc002517ce0) Data frame received for 3 I0322 23:37:45.383305 7 log.go:172] (0xc00111e0a0) (3) Data frame handling I0322 23:37:45.383340 7 log.go:172] (0xc00111e0a0) (3) Data frame sent I0322 23:37:45.383414 7 log.go:172] (0xc002517ce0) Data frame received for 5 I0322 23:37:45.383483 7 log.go:172] (0xc00117ba40) (5) Data frame handling I0322 23:37:45.383527 7 log.go:172] (0xc002517ce0) Data frame received for 3 I0322 23:37:45.383545 7 log.go:172] (0xc00111e0a0) (3) Data frame handling I0322 23:37:45.385073 7 log.go:172] (0xc002517ce0) Data frame received for 1 I0322 23:37:45.385106 7 log.go:172] (0xc0012b4fa0) (1) Data frame handling I0322 23:37:45.385308 7 log.go:172] (0xc0012b4fa0) (1) Data frame sent I0322 23:37:45.385332 7 log.go:172] (0xc002517ce0) (0xc0012b4fa0) Stream removed, broadcasting: 1 I0322 23:37:45.385426 7 log.go:172] (0xc002517ce0) Go away received I0322 23:37:45.385484 7 log.go:172] (0xc002517ce0) (0xc0012b4fa0) Stream removed, broadcasting: 1 I0322 23:37:45.385521 7 log.go:172] (0xc002517ce0) (0xc00111e0a0) Stream removed, broadcasting: 3 I0322 23:37:45.385544 7 log.go:172] (0xc002517ce0) (0xc00117ba40) Stream removed, broadcasting: 5 Mar 22 23:37:45.385: INFO: Exec stderr: "" Mar 22 23:37:45.385: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.385: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.422147 7 log.go:172] (0xc002d3e4d0) (0xc00111e6e0) Create stream I0322 23:37:45.422177 7 log.go:172] (0xc002d3e4d0) (0xc00111e6e0) Stream added, broadcasting: 1 I0322 23:37:45.424497 7 log.go:172] (0xc002d3e4d0) Reply frame received for 1 I0322 23:37:45.424524 7 log.go:172] (0xc002d3e4d0) (0xc001269c20) Create stream I0322 23:37:45.424533 7 log.go:172] (0xc002d3e4d0) (0xc001269c20) Stream added, broadcasting: 3 I0322 23:37:45.425910 7 log.go:172] (0xc002d3e4d0) Reply frame received for 3 I0322 23:37:45.425946 7 log.go:172] (0xc002d3e4d0) (0xc0012b50e0) Create stream I0322 23:37:45.425962 7 log.go:172] (0xc002d3e4d0) (0xc0012b50e0) Stream added, broadcasting: 5 I0322 23:37:45.426910 7 log.go:172] (0xc002d3e4d0) Reply frame received for 5 I0322 23:37:45.491896 7 log.go:172] (0xc002d3e4d0) Data frame received for 5 I0322 23:37:45.491931 7 log.go:172] (0xc0012b50e0) (5) Data frame handling I0322 23:37:45.492607 7 log.go:172] (0xc002d3e4d0) Data frame received for 3 I0322 23:37:45.492648 7 log.go:172] (0xc001269c20) (3) Data frame handling I0322 23:37:45.492677 7 log.go:172] (0xc001269c20) (3) Data frame sent I0322 23:37:45.492697 7 log.go:172] (0xc002d3e4d0) Data frame received for 3 I0322 23:37:45.492710 7 log.go:172] (0xc001269c20) (3) Data frame handling I0322 23:37:45.499122 7 log.go:172] (0xc002d3e4d0) Data frame received for 1 I0322 23:37:45.499154 7 log.go:172] (0xc00111e6e0) (1) Data frame handling I0322 23:37:45.499183 7 log.go:172] (0xc00111e6e0) (1) Data frame sent I0322 23:37:45.499200 7 log.go:172] (0xc002d3e4d0) (0xc00111e6e0) Stream removed, broadcasting: 1 I0322 23:37:45.499217 7 log.go:172] (0xc002d3e4d0) Go away received I0322 23:37:45.499435 7 log.go:172] (0xc002d3e4d0) (0xc00111e6e0) Stream removed, broadcasting: 1 I0322 23:37:45.499447 7 log.go:172] (0xc002d3e4d0) (0xc001269c20) Stream removed, broadcasting: 3 
I0322 23:37:45.499453 7 log.go:172] (0xc002d3e4d0) (0xc0012b50e0) Stream removed, broadcasting: 5 Mar 22 23:37:45.499: INFO: Exec stderr: "" Mar 22 23:37:45.499: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.499: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.523961 7 log.go:172] (0xc002694210) (0xc0012b5360) Create stream I0322 23:37:45.523991 7 log.go:172] (0xc002694210) (0xc0012b5360) Stream added, broadcasting: 1 I0322 23:37:45.525716 7 log.go:172] (0xc002694210) Reply frame received for 1 I0322 23:37:45.525768 7 log.go:172] (0xc002694210) (0xc001227e00) Create stream I0322 23:37:45.525790 7 log.go:172] (0xc002694210) (0xc001227e00) Stream added, broadcasting: 3 I0322 23:37:45.526663 7 log.go:172] (0xc002694210) Reply frame received for 3 I0322 23:37:45.526697 7 log.go:172] (0xc002694210) (0xc001269cc0) Create stream I0322 23:37:45.526711 7 log.go:172] (0xc002694210) (0xc001269cc0) Stream added, broadcasting: 5 I0322 23:37:45.527601 7 log.go:172] (0xc002694210) Reply frame received for 5 I0322 23:37:45.592448 7 log.go:172] (0xc002694210) Data frame received for 5 I0322 23:37:45.592490 7 log.go:172] (0xc001269cc0) (5) Data frame handling I0322 23:37:45.592527 7 log.go:172] (0xc002694210) Data frame received for 3 I0322 23:37:45.592550 7 log.go:172] (0xc001227e00) (3) Data frame handling I0322 23:37:45.592580 7 log.go:172] (0xc001227e00) (3) Data frame sent I0322 23:37:45.592603 7 log.go:172] (0xc002694210) Data frame received for 3 I0322 23:37:45.592623 7 log.go:172] (0xc001227e00) (3) Data frame handling I0322 23:37:45.593958 7 log.go:172] (0xc002694210) Data frame received for 1 I0322 23:37:45.593995 7 log.go:172] (0xc0012b5360) (1) Data frame handling I0322 23:37:45.594018 7 log.go:172] (0xc0012b5360) (1) Data frame sent I0322 23:37:45.594047 7 log.go:172] (0xc002694210) 
(0xc0012b5360) Stream removed, broadcasting: 1 I0322 23:37:45.594074 7 log.go:172] (0xc002694210) Go away received I0322 23:37:45.594167 7 log.go:172] (0xc002694210) (0xc0012b5360) Stream removed, broadcasting: 1 I0322 23:37:45.594204 7 log.go:172] (0xc002694210) (0xc001227e00) Stream removed, broadcasting: 3 I0322 23:37:45.594224 7 log.go:172] (0xc002694210) (0xc001269cc0) Stream removed, broadcasting: 5 Mar 22 23:37:45.594: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 22 23:37:45.594: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.594: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.630800 7 log.go:172] (0xc002694a50) (0xc0012b55e0) Create stream I0322 23:37:45.630826 7 log.go:172] (0xc002694a50) (0xc0012b55e0) Stream added, broadcasting: 1 I0322 23:37:45.632519 7 log.go:172] (0xc002694a50) Reply frame received for 1 I0322 23:37:45.632543 7 log.go:172] (0xc002694a50) (0xc001269d60) Create stream I0322 23:37:45.632556 7 log.go:172] (0xc002694a50) (0xc001269d60) Stream added, broadcasting: 3 I0322 23:37:45.633730 7 log.go:172] (0xc002694a50) Reply frame received for 3 I0322 23:37:45.633784 7 log.go:172] (0xc002694a50) (0xc000fe0140) Create stream I0322 23:37:45.633797 7 log.go:172] (0xc002694a50) (0xc000fe0140) Stream added, broadcasting: 5 I0322 23:37:45.634661 7 log.go:172] (0xc002694a50) Reply frame received for 5 I0322 23:37:45.702472 7 log.go:172] (0xc002694a50) Data frame received for 3 I0322 23:37:45.702506 7 log.go:172] (0xc001269d60) (3) Data frame handling I0322 23:37:45.702531 7 log.go:172] (0xc001269d60) (3) Data frame sent I0322 23:37:45.702554 7 log.go:172] (0xc002694a50) Data frame received for 3 I0322 23:37:45.702574 7 log.go:172] (0xc001269d60) (3) Data frame handling I0322 23:37:45.702608 7 
log.go:172] (0xc002694a50) Data frame received for 5 I0322 23:37:45.702630 7 log.go:172] (0xc000fe0140) (5) Data frame handling I0322 23:37:45.704311 7 log.go:172] (0xc002694a50) Data frame received for 1 I0322 23:37:45.704412 7 log.go:172] (0xc0012b55e0) (1) Data frame handling I0322 23:37:45.704450 7 log.go:172] (0xc0012b55e0) (1) Data frame sent I0322 23:37:45.704480 7 log.go:172] (0xc002694a50) (0xc0012b55e0) Stream removed, broadcasting: 1 I0322 23:37:45.704512 7 log.go:172] (0xc002694a50) Go away received I0322 23:37:45.704655 7 log.go:172] (0xc002694a50) (0xc0012b55e0) Stream removed, broadcasting: 1 I0322 23:37:45.704684 7 log.go:172] (0xc002694a50) (0xc001269d60) Stream removed, broadcasting: 3 I0322 23:37:45.704706 7 log.go:172] (0xc002694a50) (0xc000fe0140) Stream removed, broadcasting: 5 Mar 22 23:37:45.704: INFO: Exec stderr: "" Mar 22 23:37:45.704: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.704: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.734673 7 log.go:172] (0xc002695130) (0xc0012b5c20) Create stream I0322 23:37:45.734702 7 log.go:172] (0xc002695130) (0xc0012b5c20) Stream added, broadcasting: 1 I0322 23:37:45.736700 7 log.go:172] (0xc002695130) Reply frame received for 1 I0322 23:37:45.736749 7 log.go:172] (0xc002695130) (0xc0012b5d60) Create stream I0322 23:37:45.736769 7 log.go:172] (0xc002695130) (0xc0012b5d60) Stream added, broadcasting: 3 I0322 23:37:45.737883 7 log.go:172] (0xc002695130) Reply frame received for 3 I0322 23:37:45.737920 7 log.go:172] (0xc002695130) (0xc0012b5e00) Create stream I0322 23:37:45.737932 7 log.go:172] (0xc002695130) (0xc0012b5e00) Stream added, broadcasting: 5 I0322 23:37:45.738973 7 log.go:172] (0xc002695130) Reply frame received for 5 I0322 23:37:45.813926 7 log.go:172] (0xc002695130) Data frame received for 3 I0322 
23:37:45.813971 7 log.go:172] (0xc0012b5d60) (3) Data frame handling I0322 23:37:45.813999 7 log.go:172] (0xc0012b5d60) (3) Data frame sent I0322 23:37:45.814028 7 log.go:172] (0xc002695130) Data frame received for 3 I0322 23:37:45.814052 7 log.go:172] (0xc0012b5d60) (3) Data frame handling I0322 23:37:45.814072 7 log.go:172] (0xc002695130) Data frame received for 5 I0322 23:37:45.814092 7 log.go:172] (0xc0012b5e00) (5) Data frame handling I0322 23:37:45.816095 7 log.go:172] (0xc002695130) Data frame received for 1 I0322 23:37:45.816120 7 log.go:172] (0xc0012b5c20) (1) Data frame handling I0322 23:37:45.816134 7 log.go:172] (0xc0012b5c20) (1) Data frame sent I0322 23:37:45.816155 7 log.go:172] (0xc002695130) (0xc0012b5c20) Stream removed, broadcasting: 1 I0322 23:37:45.816208 7 log.go:172] (0xc002695130) Go away received I0322 23:37:45.816239 7 log.go:172] (0xc002695130) (0xc0012b5c20) Stream removed, broadcasting: 1 I0322 23:37:45.816277 7 log.go:172] (0xc002695130) (0xc0012b5d60) Stream removed, broadcasting: 3 I0322 23:37:45.816287 7 log.go:172] (0xc002695130) (0xc0012b5e00) Stream removed, broadcasting: 5 Mar 22 23:37:45.816: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 22 23:37:45.816: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.816: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.845431 7 log.go:172] (0xc0024600b0) (0xc000fe0500) Create stream I0322 23:37:45.845458 7 log.go:172] (0xc0024600b0) (0xc000fe0500) Stream added, broadcasting: 1 I0322 23:37:45.847219 7 log.go:172] (0xc0024600b0) Reply frame received for 1 I0322 23:37:45.847259 7 log.go:172] (0xc0024600b0) (0xc000fe06e0) Create stream I0322 23:37:45.847271 7 log.go:172] (0xc0024600b0) (0xc000fe06e0) Stream added, broadcasting: 3 I0322 
23:37:45.848069 7 log.go:172] (0xc0024600b0) Reply frame received for 3 I0322 23:37:45.848105 7 log.go:172] (0xc0024600b0) (0xc000d06280) Create stream I0322 23:37:45.848116 7 log.go:172] (0xc0024600b0) (0xc000d06280) Stream added, broadcasting: 5 I0322 23:37:45.848834 7 log.go:172] (0xc0024600b0) Reply frame received for 5 I0322 23:37:45.917400 7 log.go:172] (0xc0024600b0) Data frame received for 5 I0322 23:37:45.917437 7 log.go:172] (0xc000d06280) (5) Data frame handling I0322 23:37:45.917461 7 log.go:172] (0xc0024600b0) Data frame received for 3 I0322 23:37:45.917480 7 log.go:172] (0xc000fe06e0) (3) Data frame handling I0322 23:37:45.917492 7 log.go:172] (0xc000fe06e0) (3) Data frame sent I0322 23:37:45.917543 7 log.go:172] (0xc0024600b0) Data frame received for 3 I0322 23:37:45.917567 7 log.go:172] (0xc000fe06e0) (3) Data frame handling I0322 23:37:45.919229 7 log.go:172] (0xc0024600b0) Data frame received for 1 I0322 23:37:45.919255 7 log.go:172] (0xc000fe0500) (1) Data frame handling I0322 23:37:45.919272 7 log.go:172] (0xc000fe0500) (1) Data frame sent I0322 23:37:45.919296 7 log.go:172] (0xc0024600b0) (0xc000fe0500) Stream removed, broadcasting: 1 I0322 23:37:45.919349 7 log.go:172] (0xc0024600b0) Go away received I0322 23:37:45.919408 7 log.go:172] (0xc0024600b0) (0xc000fe0500) Stream removed, broadcasting: 1 I0322 23:37:45.919439 7 log.go:172] (0xc0024600b0) (0xc000fe06e0) Stream removed, broadcasting: 3 I0322 23:37:45.919460 7 log.go:172] (0xc0024600b0) (0xc000d06280) Stream removed, broadcasting: 5 Mar 22 23:37:45.919: INFO: Exec stderr: "" Mar 22 23:37:45.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:37:45.919: INFO: >>> kubeConfig: /root/.kube/config I0322 23:37:45.957219 7 log.go:172] (0xc0024606e0) (0xc000fe0d20) Create stream I0322 23:37:45.957253 7 log.go:172] 
(0xc0024606e0) (0xc000fe0d20) Stream added, broadcasting: 1 I0322 23:37:45.959351 7 log.go:172] (0xc0024606e0) Reply frame received for 1 I0322 23:37:45.959390 7 log.go:172] (0xc0024606e0) (0xc000fe0dc0) Create stream I0322 23:37:45.959408 7 log.go:172] (0xc0024606e0) (0xc000fe0dc0) Stream added, broadcasting: 3 I0322 23:37:45.960226 7 log.go:172] (0xc0024606e0) Reply frame received for 3 I0322 23:37:45.960259 7 log.go:172] (0xc0024606e0) (0xc00111ea00) Create stream I0322 23:37:45.960272 7 log.go:172] (0xc0024606e0) (0xc00111ea00) Stream added, broadcasting: 5 I0322 23:37:45.961472 7 log.go:172] (0xc0024606e0) Reply frame received for 5 I0322 23:37:46.029266 7 log.go:172] (0xc0024606e0) Data frame received for 5 I0322 23:37:46.029376 7 log.go:172] (0xc00111ea00) (5) Data frame handling I0322 23:37:46.029421 7 log.go:172] (0xc0024606e0) Data frame received for 3 I0322 23:37:46.029435 7 log.go:172] (0xc000fe0dc0) (3) Data frame handling I0322 23:37:46.029456 7 log.go:172] (0xc000fe0dc0) (3) Data frame sent I0322 23:37:46.029486 7 log.go:172] (0xc0024606e0) Data frame received for 3 I0322 23:37:46.029517 7 log.go:172] (0xc000fe0dc0) (3) Data frame handling I0322 23:37:46.031222 7 log.go:172] (0xc0024606e0) Data frame received for 1 I0322 23:37:46.031258 7 log.go:172] (0xc000fe0d20) (1) Data frame handling I0322 23:37:46.031317 7 log.go:172] (0xc000fe0d20) (1) Data frame sent I0322 23:37:46.031350 7 log.go:172] (0xc0024606e0) (0xc000fe0d20) Stream removed, broadcasting: 1 I0322 23:37:46.031390 7 log.go:172] (0xc0024606e0) Go away received I0322 23:37:46.031513 7 log.go:172] (0xc0024606e0) (0xc000fe0d20) Stream removed, broadcasting: 1 I0322 23:37:46.031556 7 log.go:172] (0xc0024606e0) (0xc000fe0dc0) Stream removed, broadcasting: 3 I0322 23:37:46.031583 7 log.go:172] (0xc0024606e0) (0xc00111ea00) Stream removed, broadcasting: 5 Mar 22 23:37:46.031: INFO: Exec stderr: "" Mar 22 23:37:46.031: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 22 23:37:46.031: INFO: >>> kubeConfig: /root/.kube/config
I0322 23:37:46.067967 7 log.go:172] (0xc002d3edc0) (0xc00111f0e0) Create stream
I0322 23:37:46.068005 7 log.go:172] (0xc002d3edc0) (0xc00111f0e0) Stream added, broadcasting: 1
I0322 23:37:46.070521 7 log.go:172] (0xc002d3edc0) Reply frame received for 1
I0322 23:37:46.070563 7 log.go:172] (0xc002d3edc0) (0xc00117bae0) Create stream
I0322 23:37:46.070581 7 log.go:172] (0xc002d3edc0) (0xc00117bae0) Stream added, broadcasting: 3
I0322 23:37:46.071594 7 log.go:172] (0xc002d3edc0) Reply frame received for 3
I0322 23:37:46.071626 7 log.go:172] (0xc002d3edc0) (0xc000d063c0) Create stream
I0322 23:37:46.071637 7 log.go:172] (0xc002d3edc0) (0xc000d063c0) Stream added, broadcasting: 5
I0322 23:37:46.072543 7 log.go:172] (0xc002d3edc0) Reply frame received for 5
I0322 23:37:46.140845 7 log.go:172] (0xc002d3edc0) Data frame received for 5
I0322 23:37:46.140877 7 log.go:172] (0xc000d063c0) (5) Data frame handling
I0322 23:37:46.140913 7 log.go:172] (0xc002d3edc0) Data frame received for 3
I0322 23:37:46.140938 7 log.go:172] (0xc00117bae0) (3) Data frame handling
I0322 23:37:46.140954 7 log.go:172] (0xc00117bae0) (3) Data frame sent
I0322 23:37:46.140968 7 log.go:172] (0xc002d3edc0) Data frame received for 3
I0322 23:37:46.140981 7 log.go:172] (0xc00117bae0) (3) Data frame handling
I0322 23:37:46.142957 7 log.go:172] (0xc002d3edc0) Data frame received for 1
I0322 23:37:46.142978 7 log.go:172] (0xc00111f0e0) (1) Data frame handling
I0322 23:37:46.143004 7 log.go:172] (0xc00111f0e0) (1) Data frame sent
I0322 23:37:46.143104 7 log.go:172] (0xc002d3edc0) (0xc00111f0e0) Stream removed, broadcasting: 1
I0322 23:37:46.143129 7 log.go:172] (0xc002d3edc0) Go away received
I0322 23:37:46.143221 7 log.go:172] (0xc002d3edc0) (0xc00111f0e0) Stream removed, broadcasting: 1
I0322 23:37:46.143357 7 log.go:172] (0xc002d3edc0) (0xc00117bae0) Stream removed, broadcasting: 3
I0322 23:37:46.143416 7 log.go:172] (0xc002d3edc0) (0xc000d063c0) Stream removed, broadcasting: 5
Mar 22 23:37:46.143: INFO: Exec stderr: ""
Mar 22 23:37:46.143: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4387 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 22 23:37:46.143: INFO: >>> kubeConfig: /root/.kube/config
I0322 23:37:46.176331 7 log.go:172] (0xc0037ec580) (0xc000b9c3c0) Create stream
I0322 23:37:46.176357 7 log.go:172] (0xc0037ec580) (0xc000b9c3c0) Stream added, broadcasting: 1
I0322 23:37:46.179119 7 log.go:172] (0xc0037ec580) Reply frame received for 1
I0322 23:37:46.179157 7 log.go:172] (0xc0037ec580) (0xc00111f180) Create stream
I0322 23:37:46.179172 7 log.go:172] (0xc0037ec580) (0xc00111f180) Stream added, broadcasting: 3
I0322 23:37:46.180190 7 log.go:172] (0xc0037ec580) Reply frame received for 3
I0322 23:37:46.180230 7 log.go:172] (0xc0037ec580) (0xc000d066e0) Create stream
I0322 23:37:46.180248 7 log.go:172] (0xc0037ec580) (0xc000d066e0) Stream added, broadcasting: 5
I0322 23:37:46.181495 7 log.go:172] (0xc0037ec580) Reply frame received for 5
I0322 23:37:46.235752 7 log.go:172] (0xc0037ec580) Data frame received for 5
I0322 23:37:46.235794 7 log.go:172] (0xc000d066e0) (5) Data frame handling
I0322 23:37:46.235826 7 log.go:172] (0xc0037ec580) Data frame received for 3
I0322 23:37:46.235842 7 log.go:172] (0xc00111f180) (3) Data frame handling
I0322 23:37:46.235861 7 log.go:172] (0xc00111f180) (3) Data frame sent
I0322 23:37:46.235873 7 log.go:172] (0xc0037ec580) Data frame received for 3
I0322 23:37:46.235887 7 log.go:172] (0xc00111f180) (3) Data frame handling
I0322 23:37:46.237316 7 log.go:172] (0xc0037ec580) Data frame received for 1
I0322 23:37:46.237361 7 log.go:172] (0xc000b9c3c0) (1) Data frame handling
I0322 23:37:46.237398 7 log.go:172] (0xc000b9c3c0) (1) Data frame sent
I0322 23:37:46.237443 7 log.go:172] (0xc0037ec580) (0xc000b9c3c0) Stream removed, broadcasting: 1
I0322 23:37:46.237474 7 log.go:172] (0xc0037ec580) Go away received
I0322 23:37:46.237574 7 log.go:172] (0xc0037ec580) (0xc000b9c3c0) Stream removed, broadcasting: 1
I0322 23:37:46.237653 7 log.go:172] (0xc0037ec580) (0xc00111f180) Stream removed, broadcasting: 3
I0322 23:37:46.237715 7 log.go:172] (0xc0037ec580) (0xc000d066e0) Stream removed, broadcasting: 5
Mar 22 23:37:46.237: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:37:46.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4387" for this suite.
• [SLOW TEST:11.157 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":249,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:37:46.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 22 23:37:46.319: INFO: Waiting up to 5m0s for pod "pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938" in namespace "emptydir-3131" to be "Succeeded or Failed"
Mar 22 23:37:46.348: INFO: Pod "pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938": Phase="Pending", Reason="", readiness=false. Elapsed: 29.268264ms
Mar 22 23:37:48.352: INFO: Pod "pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03307427s
Mar 22 23:37:50.356: INFO: Pod "pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036672639s
STEP: Saw pod success
Mar 22 23:37:50.356: INFO: Pod "pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938" satisfied condition "Succeeded or Failed"
Mar 22 23:37:50.359: INFO: Trying to get logs from node latest-worker2 pod pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938 container test-container:
STEP: delete the pod
Mar 22 23:37:50.434: INFO: Waiting for pod pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938 to disappear
Mar 22 23:37:50.437: INFO: Pod pod-5f6ceb04-a2d6-47ef-80e2-c1de5a013938 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:37:50.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3131" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":257,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:37:50.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-d30ae2a0-8377-4af4-8383-ed37daa9f892
STEP: Creating a pod to test consume secrets
Mar 22 23:37:50.511: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957" in namespace "projected-5942" to be "Succeeded or Failed"
Mar 22 23:37:50.515: INFO: Pod "pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07778ms
Mar 22 23:37:52.518: INFO: Pod "pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007914816s
Mar 22 23:37:54.522: INFO: Pod "pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01186578s
STEP: Saw pod success
Mar 22 23:37:54.522: INFO: Pod "pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957" satisfied condition "Succeeded or Failed"
Mar 22 23:37:54.525: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957 container projected-secret-volume-test:
STEP: delete the pod
Mar 22 23:37:54.545: INFO: Waiting for pod pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957 to disappear
Mar 22 23:37:54.550: INFO: Pod pod-projected-secrets-f293e040-abb0-4cab-bab8-d1979eed7957 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:37:54.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5942" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":263,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:37:54.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1440.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1440.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1440.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1440.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1440.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 142.8.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.8.142_udp@PTR;check="$$(dig +tcp +noall +answer +search 142.8.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.8.142_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1440.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1440.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1440.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1440.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1440.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1440.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 142.8.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.8.142_udp@PTR;check="$$(dig +tcp +noall +answer +search 142.8.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.8.142_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 22 23:37:58.767: INFO: Unable to read wheezy_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.770: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.772: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.775: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.801: INFO: Unable to read jessie_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.803: INFO: Unable to read jessie_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.805: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod 
dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.807: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:37:58.821: INFO: Lookups using dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07 failed for: [wheezy_udp@dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_udp@dns-test-service.dns-1440.svc.cluster.local jessie_tcp@dns-test-service.dns-1440.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local]
Mar 22 23:38:03.825: INFO: Unable to read wheezy_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.832: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.835: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.839: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.857: INFO: Unable to read jessie_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.860: INFO: Unable to read jessie_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.862: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.865: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:03.883: INFO: Lookups using dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07 failed for: [wheezy_udp@dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_udp@dns-test-service.dns-1440.svc.cluster.local jessie_tcp@dns-test-service.dns-1440.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local]
Mar 22 23:38:08.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.833: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.837: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.860: INFO: Unable to read jessie_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.863: INFO: Unable to read jessie_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.867: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.870: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:08.897: INFO: Lookups using dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07 failed for: [wheezy_udp@dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_udp@dns-test-service.dns-1440.svc.cluster.local jessie_tcp@dns-test-service.dns-1440.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local]
Mar 22 23:38:13.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.829: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.857: INFO: Unable to read jessie_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.860: INFO: Unable to read jessie_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.863: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.866: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:13.884: INFO: Lookups using dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07 failed for: [wheezy_udp@dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_udp@dns-test-service.dns-1440.svc.cluster.local jessie_tcp@dns-test-service.dns-1440.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local]
Mar 22 23:38:18.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.829: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.833: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.861: INFO: Unable to read jessie_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.863: INFO: Unable to read jessie_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.866: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.868: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:18.884: INFO: Lookups using dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07 failed for: [wheezy_udp@dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_udp@dns-test-service.dns-1440.svc.cluster.local jessie_tcp@dns-test-service.dns-1440.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local]
Mar 22 23:38:23.825: INFO: Unable to read wheezy_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.831: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.834: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.947: INFO: Unable to read jessie_udp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.950: INFO: Unable to read jessie_tcp@dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.953: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.957: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local from pod dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07: the server could not find the requested resource (get pods dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07)
Mar 22 23:38:23.974: INFO: Lookups using dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07 failed for: [wheezy_udp@dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@dns-test-service.dns-1440.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_udp@dns-test-service.dns-1440.svc.cluster.local jessie_tcp@dns-test-service.dns-1440.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1440.svc.cluster.local]
Mar 22 23:38:28.890: INFO: DNS probes using dns-1440/dns-test-05e8a22c-3c4f-4cd4-9acb-66bbdbe41f07 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:38:29.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1440" for this suite.
• [SLOW TEST:34.846 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":15,"skipped":274,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:38:29.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 22 23:38:34.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5022" for this suite.
• [SLOW TEST:5.148 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":16,"skipped":277,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 22 23:38:34.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7296.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7296.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7296.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 22 23:38:40.718: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1)
Mar 22 23:38:40.721: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1)
Mar 22 23:38:40.724: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1)
Mar 22 23:38:40.727: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1)
Mar 22 23:38:40.734: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1)
Mar 22 23:38:40.736: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from 
pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:40.739: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:40.742: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:40.747: INFO: Lookups using dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local] Mar 22 23:38:45.752: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.756: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.759: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local from 
pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.763: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.773: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.776: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.779: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.783: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:45.792: INFO: Lookups using dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local] Mar 22 23:38:50.751: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.754: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.758: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.761: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.769: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.773: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.776: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod 
dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.779: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:50.786: INFO: Lookups using dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local] Mar 22 23:38:55.751: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.754: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.758: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.760: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod 
dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.787: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.790: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.793: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.796: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:38:55.802: INFO: Lookups using dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local] Mar 22 23:39:00.794: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.797: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.800: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.804: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.813: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.816: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.819: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.822: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:00.827: INFO: Lookups using dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local] Mar 22 23:39:05.752: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.756: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.760: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.767: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.797: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.800: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.802: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.805: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local from pod dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1: the server could not find the requested resource (get pods dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1) Mar 22 23:39:05.812: INFO: Lookups using dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7296.svc.cluster.local jessie_udp@dns-test-service-2.dns-7296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7296.svc.cluster.local] Mar 22 23:39:10.821: INFO: DNS probes using dns-7296/dns-test-f0773de0-9bce-40b3-ab67-74a651bd02e1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 
23:39:10.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7296" for this suite. • [SLOW TEST:36.394 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":17,"skipped":318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:10.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-f432bc96-3611-4505-97ee-d492636340f9 STEP: Creating a pod to test consume secrets Mar 22 23:39:11.416: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a" in namespace "projected-2007" to be "Succeeded or Failed" Mar 22 23:39:11.426: INFO: Pod "pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.390827ms Mar 22 23:39:13.429: INFO: Pod "pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013464594s Mar 22 23:39:15.434: INFO: Pod "pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017965148s STEP: Saw pod success Mar 22 23:39:15.434: INFO: Pod "pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a" satisfied condition "Succeeded or Failed" Mar 22 23:39:15.437: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a container secret-volume-test: STEP: delete the pod Mar 22 23:39:15.481: INFO: Waiting for pod pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a to disappear Mar 22 23:39:15.492: INFO: Pod pod-projected-secrets-f945705d-6c4f-4444-b949-6afb8a52783a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:15.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2007" for this suite. 
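The DNS probe scripts in the Subdomain test above build each pod's A-record name from its IP with an awk one-liner (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'`). The same transformation, isolated with an illustrative pod IP in place of `hostname -i`:

```shell
# Derive the pod A-record name the probes query: dots in the pod IP
# become dashes, suffixed with the namespace's pod domain.
# The IP is illustrative; the real script uses `hostname -i`.
pod_ip="10.244.1.5"
pod_a_record=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7296.pod.cluster.local"}')
echo "$pod_a_record"
# → 10-244-1-5.dns-7296.pod.cluster.local
```

This is why the log's result keys are written as `wheezy_udp@PodARecord` and so on: the record name is computed inside the probe pod at runtime, not known to the test up front.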
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":362,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:15.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-aa6be598-dc44-4c29-83e3-9293aa95ac23 STEP: Creating a pod to test consume secrets Mar 22 23:39:15.560: INFO: Waiting up to 5m0s for pod "pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2" in namespace "secrets-4251" to be "Succeeded or Failed" Mar 22 23:39:15.578: INFO: Pod "pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.832368ms Mar 22 23:39:17.581: INFO: Pod "pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021036555s Mar 22 23:39:19.585: INFO: Pod "pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025068799s STEP: Saw pod success Mar 22 23:39:19.585: INFO: Pod "pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2" satisfied condition "Succeeded or Failed" Mar 22 23:39:19.588: INFO: Trying to get logs from node latest-worker pod pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2 container secret-volume-test: STEP: delete the pod Mar 22 23:39:19.618: INFO: Waiting for pod pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2 to disappear Mar 22 23:39:19.630: INFO: Pod pod-secrets-26f99d92-645b-46b3-8da8-2de638f286c2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:19.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4251" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":380,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:19.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API 
volume plugin Mar 22 23:39:19.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063" in namespace "downward-api-9519" to be "Succeeded or Failed" Mar 22 23:39:19.696: INFO: Pod "downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503064ms Mar 22 23:39:21.701: INFO: Pod "downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00876403s Mar 22 23:39:23.705: INFO: Pod "downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01317477s STEP: Saw pod success Mar 22 23:39:23.705: INFO: Pod "downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063" satisfied condition "Succeeded or Failed" Mar 22 23:39:23.708: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063 container client-container: STEP: delete the pod Mar 22 23:39:23.739: INFO: Waiting for pod downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063 to disappear Mar 22 23:39:23.744: INFO: Pod downwardapi-volume-d09c12c2-1d7e-4133-b5aa-c16908887063 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:23.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9519" for this suite. 
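The recurring `Waiting up to 5m0s for pod "…" to be "Succeeded or Failed"` lines, with their growing `Elapsed:` values, come from a poll loop over the pod's phase. A minimal shell sketch of that pattern (`get_phase` is a stub standing in for the framework's pod-status lookup, not a real API):

```shell
# Poll a pod's phase until it reaches a terminal state or the budget
# runs out. get_phase is a stub; the real framework reads .status.phase.
get_phase() {
  echo "Succeeded"
}

attempts=0
max_attempts=150   # ~5m at one poll every 2s, matching the log's budget
phase=""
while [ "$attempts" -lt "$max_attempts" ]; do
  phase=$(get_phase)
  case "$phase" in
    Succeeded|Failed) break ;;   # terminal phases end the wait
  esac
  sleep 2
  attempts=$((attempts + 1))
done
echo "terminal phase: $phase"
```

Each `Phase="Pending" … Elapsed: …` line in the log corresponds to one iteration of such a loop; the wait succeeds here because the stub returns a terminal phase immediately.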
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":393,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:23.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:39:23.789: INFO: Creating ReplicaSet my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7 Mar 22 23:39:23.827: INFO: Pod name my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7: Found 0 pods out of 1 Mar 22 23:39:28.830: INFO: Pod name my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7: Found 1 pods out of 1 Mar 22 23:39:28.830: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7" is running Mar 22 23:39:28.833: INFO: Pod "my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7-psl7h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 23:39:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 23:39:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 23:39:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 23:39:23 +0000 UTC Reason: Message:}]) Mar 22 23:39:28.833: INFO: Trying to dial the pod Mar 22 23:39:33.846: INFO: Controller my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7: Got expected result from replica 1 [my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7-psl7h]: "my-hostname-basic-30214d2b-b616-448d-b17c-fee736b113e7-psl7h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:33.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4204" for this suite. • [SLOW TEST:10.102 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":21,"skipped":394,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:33.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same 
name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-c4fda9c7-b1d3-40d8-a3ef-c32ed3440575 STEP: Creating a pod to test consume secrets Mar 22 23:39:33.986: INFO: Waiting up to 5m0s for pod "pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615" in namespace "secrets-9240" to be "Succeeded or Failed" Mar 22 23:39:34.063: INFO: Pod "pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615": Phase="Pending", Reason="", readiness=false. Elapsed: 77.543388ms Mar 22 23:39:36.099: INFO: Pod "pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11362183s Mar 22 23:39:38.103: INFO: Pod "pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117720761s STEP: Saw pod success Mar 22 23:39:38.103: INFO: Pod "pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615" satisfied condition "Succeeded or Failed" Mar 22 23:39:38.106: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615 container secret-volume-test: STEP: delete the pod Mar 22 23:39:38.153: INFO: Waiting for pod pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615 to disappear Mar 22 23:39:38.163: INFO: Pod pod-secrets-81eefd74-e4a5-4186-b91d-f26bdd923615 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:38.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9240" for this suite. STEP: Destroying namespace "secret-namespace-7926" for this suite. 
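The two `Destroying namespace` lines above reflect what this Secrets test verifies: a secret's identity is the pair (namespace, name), so identically named secrets in different namespaces are distinct objects and a pod mounts only the one from its own namespace. A trivial sketch of that identity rule (the keys are illustrative):

```shell
# Secret identity is (namespace, name); the same name in two
# namespaces yields two different objects. Keys are illustrative.
key_a="secrets-9240/secret-test"
key_b="secret-namespace-7926/secret-test"

if [ "$key_a" != "$key_b" ]; then
  echo "distinct objects"
fi
```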
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":397,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:38.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-afa0ff08-78db-4765-9857-ac98307d72b2 STEP: Creating a pod to test consume secrets Mar 22 23:39:38.308: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f" in namespace "projected-1106" to be "Succeeded or Failed" Mar 22 23:39:38.328: INFO: Pod "pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.802593ms Mar 22 23:39:40.332: INFO: Pod "pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023554751s Mar 22 23:39:42.336: INFO: Pod "pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027597166s STEP: Saw pod success Mar 22 23:39:42.336: INFO: Pod "pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f" satisfied condition "Succeeded or Failed" Mar 22 23:39:42.339: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f container projected-secret-volume-test: STEP: delete the pod Mar 22 23:39:42.370: INFO: Waiting for pod pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f to disappear Mar 22 23:39:42.391: INFO: Pod pod-projected-secrets-8bb25dfa-5849-49de-b33a-b4873460890f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:42.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1106" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":403,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:42.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1640.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1640.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 23:39:48.533: INFO: DNS probes using dns-1640/dns-test-41f854c8-5542-4c44-ab69-d6a82827e215 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:48.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1640" for this suite. 
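Editor's note: the probe scripts logged above derive a pod A-record name from the pod's IP (`hostname -i`) by replacing dots with dashes and appending the namespace-scoped DNS suffix; the `$$` doubling in the logged commands is the harness's shell escaping. The transformation itself can be sketched in plain shell. The pod IP here is illustrative (`10.244.1.8` appears later in this run); `dns-1640` is the test namespace from the log.

```shell
# Derive a pod A-record name the way the probe script does:
# dots in the pod IP become dashes, then the <namespace>.pod.<cluster-domain>
# suffix is appended.
pod_ip="10.244.1.8"   # illustrative pod IP
echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1640.pod.cluster.local"}'
```

Resolving that name (e.g. with `dig ... A` as in the logged loop) should return the pod IP itself.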
• [SLOW TEST:6.195 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":24,"skipped":406,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:48.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 22 23:39:53.168: INFO: Successfully updated pod "annotationupdate85ddca86-886b-4ffa-89d8-f525e74c37d4" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:39:57.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4862" for this suite. 
• [SLOW TEST:8.639 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":414,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:39:57.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 22 23:39:57.297: INFO: Waiting up to 5m0s for pod "downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2" in namespace "downward-api-7984" to be "Succeeded or Failed" Mar 22 23:39:57.299: INFO: Pod "downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3675ms Mar 22 23:39:59.315: INFO: Pod "downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018584028s Mar 22 23:40:01.320: INFO: Pod "downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02286509s STEP: Saw pod success Mar 22 23:40:01.320: INFO: Pod "downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2" satisfied condition "Succeeded or Failed" Mar 22 23:40:01.323: INFO: Trying to get logs from node latest-worker pod downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2 container dapi-container: STEP: delete the pod Mar 22 23:40:01.339: INFO: Waiting for pod downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2 to disappear Mar 22 23:40:01.344: INFO: Pod downward-api-a03eaeb6-a2a8-487b-9f74-7e34e5c12df2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:01.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7984" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":421,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:01.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 22 23:40:01.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d" in namespace "projected-8423" to be "Succeeded or Failed" Mar 22 23:40:01.445: INFO: Pod "downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.644819ms Mar 22 23:40:03.449: INFO: Pod "downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019394621s Mar 22 23:40:05.452: INFO: Pod "downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022515621s STEP: Saw pod success Mar 22 23:40:05.452: INFO: Pod "downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d" satisfied condition "Succeeded or Failed" Mar 22 23:40:05.455: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d container client-container: STEP: delete the pod Mar 22 23:40:05.485: INFO: Waiting for pod downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d to disappear Mar 22 23:40:05.493: INFO: Pod downwardapi-volume-3dcbe8e2-f108-449d-b4d5-41abbf2d5d4d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:05.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8423" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":422,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:05.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 22 23:40:05.567: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402" in namespace "projected-2169" to be "Succeeded or Failed" Mar 22 23:40:05.571: INFO: Pod "downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402": Phase="Pending", Reason="", readiness=false. Elapsed: 3.433943ms Mar 22 23:40:07.574: INFO: Pod "downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006343365s Mar 22 23:40:09.578: INFO: Pod "downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010312392s STEP: Saw pod success Mar 22 23:40:09.578: INFO: Pod "downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402" satisfied condition "Succeeded or Failed" Mar 22 23:40:09.580: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402 container client-container: STEP: delete the pod Mar 22 23:40:09.609: INFO: Waiting for pod downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402 to disappear Mar 22 23:40:09.625: INFO: Pod downwardapi-volume-87cb4693-4724-4f48-a71c-40a642153402 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:09.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2169" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":430,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:09.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:13.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8450" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:13.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Mar 22 23:40:13.783: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:13.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4038" 
for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":30,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:13.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:40:14.014: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 22 23:40:19.022: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 22 23:40:19.022: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 22 23:40:19.058: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3745 /apis/apps/v1/namespaces/deployment-3745/deployments/test-cleanup-deployment 9c2174d9-bcd3-4017-a30c-d3c64d5204ba 2003544 1 2020-03-22 23:40:19 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b670d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 22 23:40:19.087: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-3745 /apis/apps/v1/namespaces/deployment-3745/replicasets/test-cleanup-deployment-577c77b589 8bf86fb6-22b9-4d9f-b417-bb6d16d3c917 2003546 1 2020-03-22 23:40:19 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9c2174d9-bcd3-4017-a30c-d3c64d5204ba 0xc002b67697 0xc002b67698}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 
577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b67708 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 23:40:19.087: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 22 23:40:19.087: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3745 /apis/apps/v1/namespaces/deployment-3745/replicasets/test-cleanup-controller 207d7533-75ff-4426-a43a-badc732b3977 2003545 1 2020-03-22 23:40:13 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9c2174d9-bcd3-4017-a30c-d3c64d5204ba 0xc002b675c7 0xc002b675c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b67628 ClusterFirst map[] false false 
false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 22 23:40:19.160: INFO: Pod "test-cleanup-controller-b8klq" is available: &Pod{ObjectMeta:{test-cleanup-controller-b8klq test-cleanup-controller- deployment-3745 /api/v1/namespaces/deployment-3745/pods/test-cleanup-controller-b8klq 08204628-f238-4201-acd2-13b29b7a833b 2003522 0 2020-03-22 23:40:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 207d7533-75ff-4426-a43a-badc732b3977 0xc002b67dc7 0xc002b67dc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b5frb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b5frb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b5frb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:If
NotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:40:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:40:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:40:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-22 23:40:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.8,StartTime:2020-03-22 23:40:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:40:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://12e3da0c52a23a43a768976daee9b053e5969bd43f3fa394788eac92037e4c3d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:40:19.160: INFO: Pod "test-cleanup-deployment-577c77b589-xs9lp" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-xs9lp test-cleanup-deployment-577c77b589- deployment-3745 /api/v1/namespaces/deployment-3745/pods/test-cleanup-deployment-577c77b589-xs9lp 952f41b8-9571-4379-a14c-4941ae397b92 2003552 0 2020-03-22 23:40:19 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 8bf86fb6-22b9-4d9f-b417-bb6d16d3c917 0xc002b67f57 0xc002b67f58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b5frb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b5frb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b5frb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:40:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:19.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3745" for this suite. 
• [SLOW TEST:5.293 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":31,"skipped":471,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:19.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 22 23:40:19.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81" in namespace "projected-8681" to be "Succeeded or Failed" Mar 22 23:40:19.303: INFO: Pod "downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.70061ms Mar 22 23:40:21.306: INFO: Pod "downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00666098s Mar 22 23:40:23.311: INFO: Pod "downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010717747s STEP: Saw pod success Mar 22 23:40:23.311: INFO: Pod "downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81" satisfied condition "Succeeded or Failed" Mar 22 23:40:23.314: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81 container client-container: STEP: delete the pod Mar 22 23:40:23.358: INFO: Waiting for pod downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81 to disappear Mar 22 23:40:23.374: INFO: Pod downwardapi-volume-ee567002-129a-447c-969f-fd6fbf2dcd81 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:23.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8681" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":480,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:23.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:39.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4787" for this suite. • [SLOW TEST:16.250 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":33,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:39.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 23:40:40.175: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 23:40:42.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517240, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517240, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517240, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517240, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 23:40:45.202: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 22 23:40:45.218: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:45.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1772" for this suite. STEP: Destroying namespace "webhook-1772-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.712 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":34,"skipped":560,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:45.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 23:40:48.490: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the 
container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:48.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7296" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:48.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 22 23:40:52.659: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3017 PodName:pod-sharedvolume-3505b2a8-81bb-4a4a-947b-fd4efdd3eca2 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:40:52.659: INFO: >>> kubeConfig: /root/.kube/config I0322 23:40:52.694122 7 log.go:172] (0xc002d3ea50) (0xc001a5d900) Create stream I0322 23:40:52.694168 7 log.go:172] 
(0xc002d3ea50) (0xc001a5d900) Stream added, broadcasting: 1 I0322 23:40:52.696715 7 log.go:172] (0xc002d3ea50) Reply frame received for 1 I0322 23:40:52.696758 7 log.go:172] (0xc002d3ea50) (0xc0011f7040) Create stream I0322 23:40:52.696772 7 log.go:172] (0xc002d3ea50) (0xc0011f7040) Stream added, broadcasting: 3 I0322 23:40:52.698004 7 log.go:172] (0xc002d3ea50) Reply frame received for 3 I0322 23:40:52.698041 7 log.go:172] (0xc002d3ea50) (0xc001ad6be0) Create stream I0322 23:40:52.698055 7 log.go:172] (0xc002d3ea50) (0xc001ad6be0) Stream added, broadcasting: 5 I0322 23:40:52.699036 7 log.go:172] (0xc002d3ea50) Reply frame received for 5 I0322 23:40:52.761361 7 log.go:172] (0xc002d3ea50) Data frame received for 5 I0322 23:40:52.761401 7 log.go:172] (0xc001ad6be0) (5) Data frame handling I0322 23:40:52.761441 7 log.go:172] (0xc002d3ea50) Data frame received for 3 I0322 23:40:52.761464 7 log.go:172] (0xc0011f7040) (3) Data frame handling I0322 23:40:52.761484 7 log.go:172] (0xc0011f7040) (3) Data frame sent I0322 23:40:52.761498 7 log.go:172] (0xc002d3ea50) Data frame received for 3 I0322 23:40:52.761511 7 log.go:172] (0xc0011f7040) (3) Data frame handling I0322 23:40:52.762761 7 log.go:172] (0xc002d3ea50) Data frame received for 1 I0322 23:40:52.762795 7 log.go:172] (0xc001a5d900) (1) Data frame handling I0322 23:40:52.762824 7 log.go:172] (0xc001a5d900) (1) Data frame sent I0322 23:40:52.762845 7 log.go:172] (0xc002d3ea50) (0xc001a5d900) Stream removed, broadcasting: 1 I0322 23:40:52.762950 7 log.go:172] (0xc002d3ea50) (0xc001a5d900) Stream removed, broadcasting: 1 I0322 23:40:52.762972 7 log.go:172] (0xc002d3ea50) (0xc0011f7040) Stream removed, broadcasting: 3 I0322 23:40:52.762989 7 log.go:172] (0xc002d3ea50) (0xc001ad6be0) Stream removed, broadcasting: 5 Mar 22 23:40:52.763: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:52.763: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready I0322 23:40:52.763389 7 log.go:172] (0xc002d3ea50) Go away received STEP: Destroying namespace "emptydir-3017" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":36,"skipped":599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:52.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-eaf7430a-eaf0-4a41-afd0-b82fc1efdf63 STEP: Creating a pod to test consume configMaps Mar 22 23:40:52.891: INFO: Waiting up to 5m0s for pod "pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a" in namespace "configmap-1780" to be "Succeeded or Failed" Mar 22 23:40:52.896: INFO: Pod "pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608708ms Mar 22 23:40:54.900: INFO: Pod "pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008433658s Mar 22 23:40:56.904: INFO: Pod "pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012259864s STEP: Saw pod success Mar 22 23:40:56.904: INFO: Pod "pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a" satisfied condition "Succeeded or Failed" Mar 22 23:40:56.907: INFO: Trying to get logs from node latest-worker pod pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a container configmap-volume-test: STEP: delete the pod Mar 22 23:40:56.976: INFO: Waiting for pod pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a to disappear Mar 22 23:40:56.980: INFO: Pod pod-configmaps-624721a1-54bd-4f77-916b-de0dff907e1a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:40:56.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1780" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":633,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:40:56.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 22 23:41:07.130: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 23:41:07.136: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 23:41:09.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 23:41:09.141: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 23:41:11.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 23:41:11.141: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:41:11.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7720" for this suite. 
• [SLOW TEST:14.168 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":635,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:41:11.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:41:11.214: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Pending, waiting for it to be Running (with Ready = true) Mar 22 23:41:13.218: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Pending, 
waiting for it to be Running (with Ready = true) Mar 22 23:41:15.218: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = false) Mar 22 23:41:17.234: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = false) Mar 22 23:41:19.218: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = false) Mar 22 23:41:21.217: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = false) Mar 22 23:41:23.218: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = false) Mar 22 23:41:25.218: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = false) Mar 22 23:41:27.218: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = false) Mar 22 23:41:29.217: INFO: The status of Pod test-webserver-ece3c6ef-a909-45b3-8fb8-730d1d416534 is Running (Ready = true) Mar 22 23:41:29.221: INFO: Container started at 2020-03-22 23:41:13 +0000 UTC, pod became ready at 2020-03-22 23:41:28 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:41:29.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2252" for this suite. 
• [SLOW TEST:18.073 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":647,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:41:29.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Mar 22 23:41:29.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-7630 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 22 23:41:29.399: INFO: stderr: "" Mar 22 23:41:29.399: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Mar 22 23:41:29.399: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 22 23:41:29.399: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7630" to be "running and ready, or succeeded" Mar 22 23:41:29.423: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 23.57054ms Mar 22 23:41:31.427: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027594565s Mar 22 23:41:33.431: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.031749019s Mar 22 23:41:33.431: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 22 23:41:33.431: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Mar 22 23:41:33.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7630' Mar 22 23:41:33.556: INFO: stderr: "" Mar 22 23:41:33.556: INFO: stdout: "I0322 23:41:31.487816 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/6455 594\nI0322 23:41:31.687978 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/rqc 233\nI0322 23:41:31.887982 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/7dl 308\nI0322 23:41:32.088114 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/mv7r 223\nI0322 23:41:32.288036 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/292p 315\nI0322 23:41:32.488020 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/7j9 535\nI0322 23:41:32.688003 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/fm7 444\nI0322 23:41:32.888019 1 logs_generator.go:76] 7 PUT 
/api/v1/namespaces/kube-system/pods/4gm 410\nI0322 23:41:33.087980 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/z6f 325\nI0322 23:41:33.288038 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/fdk 403\nI0322 23:41:33.487988 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/tvcr 527\n" STEP: limiting log lines Mar 22 23:41:33.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7630 --tail=1' Mar 22 23:41:33.672: INFO: stderr: "" Mar 22 23:41:33.672: INFO: stdout: "I0322 23:41:33.487988 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/tvcr 527\n" Mar 22 23:41:33.672: INFO: got output "I0322 23:41:33.487988 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/tvcr 527\n" STEP: limiting log bytes Mar 22 23:41:33.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7630 --limit-bytes=1' Mar 22 23:41:33.789: INFO: stderr: "" Mar 22 23:41:33.789: INFO: stdout: "I" Mar 22 23:41:33.789: INFO: got output "I" STEP: exposing timestamps Mar 22 23:41:33.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7630 --tail=1 --timestamps' Mar 22 23:41:33.886: INFO: stderr: "" Mar 22 23:41:33.886: INFO: stdout: "2020-03-22T23:41:33.688199687Z I0322 23:41:33.687997 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/l6rk 490\n" Mar 22 23:41:33.886: INFO: got output "2020-03-22T23:41:33.688199687Z I0322 23:41:33.687997 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/l6rk 490\n" STEP: restricting to a time range Mar 22 23:41:36.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs 
logs-generator logs-generator --namespace=kubectl-7630 --since=1s' Mar 22 23:41:36.500: INFO: stderr: "" Mar 22 23:41:36.500: INFO: stdout: "I0322 23:41:35.688033 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/ggb 304\nI0322 23:41:35.887982 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/r5m 240\nI0322 23:41:36.087983 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/xsr 269\nI0322 23:41:36.287964 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/rqw 272\nI0322 23:41:36.487994 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/qttl 359\n" Mar 22 23:41:36.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7630 --since=24h' Mar 22 23:41:36.600: INFO: stderr: "" Mar 22 23:41:36.600: INFO: stdout: "I0322 23:41:31.487816 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/6455 594\nI0322 23:41:31.687978 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/rqc 233\nI0322 23:41:31.887982 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/7dl 308\nI0322 23:41:32.088114 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/mv7r 223\nI0322 23:41:32.288036 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/292p 315\nI0322 23:41:32.488020 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/7j9 535\nI0322 23:41:32.688003 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/fm7 444\nI0322 23:41:32.888019 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/4gm 410\nI0322 23:41:33.087980 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/z6f 325\nI0322 23:41:33.288038 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/fdk 403\nI0322 23:41:33.487988 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/tvcr 527\nI0322 23:41:33.687997 1 logs_generator.go:76] 11 PUT 
/api/v1/namespaces/default/pods/l6rk 490\nI0322 23:41:33.887970 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/jsj 300\nI0322 23:41:34.087969 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/7xt6 236\nI0322 23:41:34.288028 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/d88g 254\nI0322 23:41:34.487995 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/2ds 237\nI0322 23:41:34.687978 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/q5b 382\nI0322 23:41:34.887990 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/xd7 380\nI0322 23:41:35.087998 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/z5sd 433\nI0322 23:41:35.288025 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/rsq 205\nI0322 23:41:35.487975 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/gz8z 561\nI0322 23:41:35.688033 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/ggb 304\nI0322 23:41:35.887982 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/r5m 240\nI0322 23:41:36.087983 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/xsr 269\nI0322 23:41:36.287964 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/rqw 272\nI0322 23:41:36.487994 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/qttl 359\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Mar 22 23:41:36.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7630' Mar 22 23:41:43.040: INFO: stderr: "" Mar 22 23:41:43.040: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:41:43.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-7630" for this suite. • [SLOW TEST:13.817 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":40,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:41:43.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 22 23:41:43.115: INFO: Waiting up to 5m0s for pod "downward-api-548610c2-2838-4693-a33c-1779500d7583" in namespace "downward-api-8096" to be "Succeeded or Failed" Mar 22 23:41:43.136: INFO: Pod "downward-api-548610c2-2838-4693-a33c-1779500d7583": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.32021ms Mar 22 23:41:45.140: INFO: Pod "downward-api-548610c2-2838-4693-a33c-1779500d7583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025345777s Mar 22 23:41:47.145: INFO: Pod "downward-api-548610c2-2838-4693-a33c-1779500d7583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02985947s STEP: Saw pod success Mar 22 23:41:47.145: INFO: Pod "downward-api-548610c2-2838-4693-a33c-1779500d7583" satisfied condition "Succeeded or Failed" Mar 22 23:41:47.148: INFO: Trying to get logs from node latest-worker2 pod downward-api-548610c2-2838-4693-a33c-1779500d7583 container dapi-container: STEP: delete the pod Mar 22 23:41:47.183: INFO: Waiting for pod downward-api-548610c2-2838-4693-a33c-1779500d7583 to disappear Mar 22 23:41:47.196: INFO: Pod downward-api-548610c2-2838-4693-a33c-1779500d7583 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:41:47.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8096" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":679,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:41:47.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-035fc86a-dee9-47da-8c95-369dad7db6a0 STEP: Creating a pod to test consume configMaps Mar 22 23:41:47.307: INFO: Waiting up to 5m0s for pod "pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd" in namespace "configmap-7973" to be "Succeeded or Failed" Mar 22 23:41:47.310: INFO: Pod "pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.620386ms Mar 22 23:41:49.313: INFO: Pod "pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006665333s Mar 22 23:41:51.317: INFO: Pod "pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010399994s STEP: Saw pod success Mar 22 23:41:51.317: INFO: Pod "pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd" satisfied condition "Succeeded or Failed" Mar 22 23:41:51.320: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd container configmap-volume-test: STEP: delete the pod Mar 22 23:41:51.351: INFO: Waiting for pod pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd to disappear Mar 22 23:41:51.364: INFO: Pod pod-configmaps-71eee446-3328-4347-9d9e-f0c2b4561bbd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:41:51.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7973" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":692,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:41:51.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 
pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0322 23:41:52.563254 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 22 23:41:52.563: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:41:52.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9897" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":43,"skipped":701,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:41:52.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5141.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5141.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5141.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5141.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5141.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5141.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 23:41:58.740: INFO: DNS probes using dns-5141/dns-test-250ab58f-184d-453b-adf6-9a35beb1060d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:41:58.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5141" for this suite. 
• [SLOW TEST:6.309 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":44,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:41:58.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 23:41:59.567: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 23:42:01.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517319, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517319, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517319, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517319, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 23:42:04.635: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:42:04.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4443" for this suite. STEP: Destroying namespace "webhook-4443-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.953 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":45,"skipped":728,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:42:04.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 22 23:42:05.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f" in namespace "projected-9781" to be "Succeeded or Failed" Mar 22 23:42:05.324: INFO: Pod 
"downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f": Phase="Pending", Reason="", readiness=false. Elapsed: 58.878772ms Mar 22 23:42:07.383: INFO: Pod "downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117439202s Mar 22 23:42:09.388: INFO: Pod "downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122181476s STEP: Saw pod success Mar 22 23:42:09.388: INFO: Pod "downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f" satisfied condition "Succeeded or Failed" Mar 22 23:42:09.391: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f container client-container: STEP: delete the pod Mar 22 23:42:09.415: INFO: Waiting for pod downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f to disappear Mar 22 23:42:09.443: INFO: Pod downwardapi-volume-96de701d-650e-46e8-a019-b2214cd3249f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:42:09.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9781" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":730,"failed":0} ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:42:09.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:42:09.587: INFO: Creating deployment "test-recreate-deployment" Mar 22 23:42:09.610: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 22 23:42:09.617: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 22 23:42:11.625: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 22 23:42:11.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517329, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517329, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517329, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720517329, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 23:42:13.631: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 22 23:42:13.671: INFO: Updating deployment test-recreate-deployment Mar 22 23:42:13.671: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 22 23:42:14.206: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8664 /apis/apps/v1/namespaces/deployment-8664/deployments/test-recreate-deployment 164b22c8-174e-4691-b074-d1a328a79f29 2004458 2 2020-03-22 23:42:09 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002aed578 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-22 23:42:13 +0000 UTC,LastTransitionTime:2020-03-22 23:42:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-22 23:42:13 +0000 UTC,LastTransitionTime:2020-03-22 23:42:09 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 22 23:42:14.270: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-8664 /apis/apps/v1/namespaces/deployment-8664/replicasets/test-recreate-deployment-5f94c574ff ab00033c-e587-4c3b-9f56-3e0100d62e02 2004455 1 2020-03-22 23:42:13 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 164b22c8-174e-4691-b074-d1a328a79f29 0xc0028e23e7 0xc0028e23e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] 
[] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028e2498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 23:42:14.270: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 22 23:42:14.270: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-8664 /apis/apps/v1/namespaces/deployment-8664/replicasets/test-recreate-deployment-846c7dd955 8796995f-1628-49ee-a7ea-2b3e947c5c80 2004447 2 2020-03-22 23:42:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 164b22c8-174e-4691-b074-d1a328a79f29 0xc0028e2537 0xc0028e2538}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028e25b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 23:42:14.274: INFO: Pod "test-recreate-deployment-5f94c574ff-vf8tz" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-vf8tz test-recreate-deployment-5f94c574ff- deployment-8664 /api/v1/namespaces/deployment-8664/pods/test-recreate-deployment-5f94c574ff-vf8tz 8ba6dc27-e3ab-4124-ba9e-9dbb41362146 2004459 0 2020-03-22 23:42:13 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff ab00033c-e587-4c3b-9f56-3e0100d62e02 0xc002aedbe7 0xc002aedbe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2bgqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2bgqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2bgqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-22 23:42:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:42:14.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8664" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":47,"skipped":730,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:42:14.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-aaa0d810-128f-4f37-b93b-8b315b0d117d STEP: Creating a pod to test consume secrets Mar 22 23:42:14.422: INFO: Waiting up to 5m0s for pod "pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c" in namespace 
"secrets-2838" to be "Succeeded or Failed" Mar 22 23:42:14.437: INFO: Pod "pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.062069ms Mar 22 23:42:16.605: INFO: Pod "pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182982611s Mar 22 23:42:18.609: INFO: Pod "pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186798121s STEP: Saw pod success Mar 22 23:42:18.609: INFO: Pod "pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c" satisfied condition "Succeeded or Failed" Mar 22 23:42:18.612: INFO: Trying to get logs from node latest-worker pod pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c container secret-env-test: STEP: delete the pod Mar 22 23:42:18.655: INFO: Waiting for pod pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c to disappear Mar 22 23:42:18.664: INFO: Pod pod-secrets-4528df59-15c4-4dba-a274-d1388b7e5f2c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:42:18.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2838" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":734,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:42:18.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 22 23:42:18.804: INFO: Waiting up to 5m0s for pod "pod-b12b5837-05ae-43e5-bff8-46ee840f214d" in namespace "emptydir-3148" to be "Succeeded or Failed" Mar 22 23:42:18.820: INFO: Pod "pod-b12b5837-05ae-43e5-bff8-46ee840f214d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.805819ms Mar 22 23:42:20.824: INFO: Pod "pod-b12b5837-05ae-43e5-bff8-46ee840f214d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019880416s Mar 22 23:42:22.828: INFO: Pod "pod-b12b5837-05ae-43e5-bff8-46ee840f214d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024065677s STEP: Saw pod success Mar 22 23:42:22.828: INFO: Pod "pod-b12b5837-05ae-43e5-bff8-46ee840f214d" satisfied condition "Succeeded or Failed" Mar 22 23:42:22.831: INFO: Trying to get logs from node latest-worker2 pod pod-b12b5837-05ae-43e5-bff8-46ee840f214d container test-container: STEP: delete the pod Mar 22 23:42:22.862: INFO: Waiting for pod pod-b12b5837-05ae-43e5-bff8-46ee840f214d to disappear Mar 22 23:42:22.875: INFO: Pod pod-b12b5837-05ae-43e5-bff8-46ee840f214d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:42:22.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3148" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":747,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:42:22.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:42:22.953: INFO: Creating deployment "webserver-deployment" Mar 22 23:42:22.957: INFO: Waiting for observed generation 1 Mar 22 
23:42:24.968: INFO: Waiting for all required pods to come up Mar 22 23:42:24.973: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 22 23:42:32.982: INFO: Waiting for deployment "webserver-deployment" to complete Mar 22 23:42:32.988: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 22 23:42:32.996: INFO: Updating deployment webserver-deployment Mar 22 23:42:32.996: INFO: Waiting for observed generation 2 Mar 22 23:42:35.003: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 22 23:42:35.005: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 22 23:42:35.008: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 22 23:42:35.016: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 22 23:42:35.016: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 22 23:42:35.018: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 22 23:42:35.023: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 22 23:42:35.023: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 22 23:42:35.027: INFO: Updating deployment webserver-deployment Mar 22 23:42:35.027: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 22 23:42:35.072: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 22 23:42:35.085: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 22 23:42:35.245: INFO: Deployment "webserver-deployment": 
&Deployment{ObjectMeta:{webserver-deployment deployment-3193 /apis/apps/v1/namespaces/deployment-3193/deployments/webserver-deployment 5e4d7deb-4bb7-4bb3-b735-a16012ab4bd5 2004788 3 2020-03-22 23:42:22 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ce6c98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-22 23:42:33 +0000 UTC,LastTransitionTime:2020-03-22 23:42:22 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-22 23:42:35 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 
UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 22 23:42:35.402: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-3193 /apis/apps/v1/namespaces/deployment-3193/replicasets/webserver-deployment-c7997dcc8 d674b860-3656-4829-9260-bec438f8ed12 2004847 3 2020-03-22 23:42:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5e4d7deb-4bb7-4bb3-b735-a16012ab4bd5 0xc002ce75c7 0xc002ce75c8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ce7638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 23:42:35.402: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 22 23:42:35.402: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-3193 
/apis/apps/v1/namespaces/deployment-3193/replicasets/webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 2004842 3 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5e4d7deb-4bb7-4bb3-b735-a16012ab4bd5 0xc002ce7507 0xc002ce7508}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ce7568 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 22 23:42:35.484: INFO: Pod "webserver-deployment-595b5b9587-8bds7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8bds7 webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-8bds7 77885201-e262-4a89-8053-82ef813d81d9 2004827 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002ce7b67 0xc002ce7b68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:
nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.484: INFO: Pod "webserver-deployment-595b5b9587-9s6vk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9s6vk webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-9s6vk c122ecb2-3194-4520-88e1-ef0ed5fe1513 2004848 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002ce7c87 0xc002ce7c88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-22 23:42:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.484: INFO: Pod "webserver-deployment-595b5b9587-bws95" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bws95 webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-bws95 e4c81d13-fc5f-4be8-8fe9-c054caae083f 2004675 0 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002ce7e67 0xc002ce7e68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.21,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b80b2e845a874405b666925bb0a83969fe0093d2f2b00590dce834f69a17ce7c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.485: INFO: Pod "webserver-deployment-595b5b9587-cwp7f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cwp7f webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-cwp7f d7809067-27f5-4cdb-9a8d-096939f7c22b 2004832 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8257 0xc002fa8258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.485: INFO: Pod "webserver-deployment-595b5b9587-ffhzt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ffhzt webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-ffhzt 48070d3b-ed4f-4503-96d7-0fab3bc6a41e 2004814 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa83a7 0xc002fa83a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.485: INFO: Pod "webserver-deployment-595b5b9587-frj7g" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-frj7g webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-frj7g 96b2d67e-99ed-4a5b-b17c-dba450bbd6ed 2004665 0 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa85a7 0xc002fa85a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.157,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d55d3674e403cbd253ce46dec645b61eb39c06ca32490ae09463701c0e79da7d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.485: INFO: Pod "webserver-deployment-595b5b9587-fxc76" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fxc76 webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-fxc76 9860c391-7969-459f-ac9a-713df20ef438 2004804 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8747 0xc002fa8748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.485: INFO: Pod "webserver-deployment-595b5b9587-gl5bj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gl5bj webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-gl5bj cd88cfdb-f634-4a53-8bc3-26feea359276 2004795 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8877 0xc002fa8878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.486: INFO: Pod "webserver-deployment-595b5b9587-kkjsc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kkjsc webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-kkjsc 193e6297-93ba-4d9f-b07b-c917051655ba 2004831 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8997 0xc002fa8998}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.486: INFO: Pod "webserver-deployment-595b5b9587-m77z2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m77z2 webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-m77z2 1372e97e-9b2a-4c37-bbb5-3f57dfb3e427 2004688 0 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8af7 0xc002fa8af8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.158,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://91eeb6dbf6740b33c91ae2f415f2cff47a133840405a3e35dcbb319562052add,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.486: INFO: Pod "webserver-deployment-595b5b9587-mp6vv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mp6vv webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-mp6vv 56bc24d5-3178-4d6c-9657-701e65296708 2004806 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8d77 0xc002fa8d78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.486: INFO: Pod "webserver-deployment-595b5b9587-nbclv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nbclv webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-nbclv eb028401-8cb7-4f0b-b627-97e3960a5ec4 2004833 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8e97 0xc002fa8e98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.486: INFO: Pod "webserver-deployment-595b5b9587-pkvtx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pkvtx webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-pkvtx 733fc2cb-db2e-4a71-b4b3-348f41920eae 2004703 0 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa8fb7 0xc002fa8fb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.22,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3fdbbe524214eff55de820a9567e8a2ac9ea99f1cc9384f6b625114e943a7184,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 22 23:42:35.486: INFO: Pod "webserver-deployment-595b5b9587-q548p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q548p webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-q548p 783c6599-4793-4277-bf97-907b3bb9dd2b 2004829 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa9137 0xc002fa9138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-595b5b9587-rkk2d" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rkk2d webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-rkk2d 8ff28d89-ed33-4318-8301-b2d89729deb6 2004710 0 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa9257 0xc002fa9258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.23,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:32 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://722a4dc51c8e62c26539ecf458cb070c244213cf0d283329c3dd316c1dbbec3a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-595b5b9587-rqct2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rqct2 webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-rqct2 bc87fc14-f689-49c3-a20e-9e5d9573513a 2004835 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa93d7 0xc002fa93d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-22 23:42:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-595b5b9587-tfftp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tfftp webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-tfftp 10d0bc7e-4d55-4ee3-bd2b-e6b38f2c68aa 2004808 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa9537 0xc002fa9538}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-595b5b9587-txkt9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-txkt9 webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-txkt9 db958578-1f1e-4ae7-b5bd-34854b91d016 2004661 0 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa9897 0xc002fa9898}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.20,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:28 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://52e3d0e5956ee3e1b3e1b3cb9605acc58c5b57d0b9199aa0a96f36e6dd4156d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-595b5b9587-vhgnf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vhgnf webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-vhgnf d8ed7800-5e7f-483a-91c3-4a565993e3e0 2004640 0 2020-03-22 23:42:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa9a17 0xc002fa9a18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.156,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:25 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6c8a89290ed491956c36c865ca8b127c5a4b63e5c8e2fb5b337f79bdfc6070fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-595b5b9587-z9rc2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z9rc2 webserver-deployment-595b5b9587- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-595b5b9587-z9rc2 97dd7522-79f5-49ce-86e7-494abfa19114 2004683 0 2020-03-22 23:42:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89dfeeb6-4b0b-472c-9415-b326b170b4a0 0xc002fa9b97 0xc002fa9b98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.159,StartTime:2020-03-22 23:42:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-22 23:42:30 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f39e4c105eb62175fb290ccfa3ea092effaf88b9495c044ecd728daebbb11929,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-c7997dcc8-8tb94" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8tb94 webserver-deployment-c7997dcc8- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-c7997dcc8-8tb94 6b66be28-bc79-4824-9177-6a9fa164f6a6 2004817 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d674b860-3656-4829-9260-bec438f8ed12 0xc002fa9d37 0xc002fa9d38}] [] 
[]},Spec: identical to the preceding webserver-deployment-c7997dcc8 pod dumps (single container httpd, image webserver:404, default-token-v622p serviceaccount volume, node latest-worker),Status: Pending, conditions: PodScheduled=True only, not yet started,}
Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-c7997dcc8-dh9kv" is not available: Pending on latest-worker (UID 70618c0e-e7fe-4553-8bbf-b8c6f2568e47, resourceVersion 2004836, created 2020-03-22 23:42:35 +0000 UTC); conditions: PodScheduled=True only; Spec identical to the full dump above.
Mar 22 23:42:35.487: INFO: Pod "webserver-deployment-c7997dcc8-g9ttl" is not available: Pending on latest-worker (UID 6217ee9c-b857-4b65-b8c9-c94907642371, resourceVersion 2004820, created 2020-03-22 23:42:35 +0000 UTC); conditions: PodScheduled=True only.
Mar 22 23:42:35.488: INFO: Pod "webserver-deployment-c7997dcc8-gdnfr" is not available: Pending on latest-worker2, HostIP 172.17.0.12 (UID c3ab6239-14f3-4a63-b8a0-df544a5b0de3, resourceVersion 2004739, created 2020-03-22 23:42:33 +0000 UTC); conditions: Initialized=True, Ready=False, ContainersReady=False (ContainersNotReady: [httpd]), PodScheduled=True; container httpd Waiting: ContainerCreating.
Mar 22 23:42:35.488: INFO: Pod "webserver-deployment-c7997dcc8-grcdz" is not available: Pending on latest-worker2, HostIP 172.17.0.12 (UID 839e1b29-88e3-4e72-9b7d-267aefd02419, resourceVersion 2004763, created 2020-03-22 23:42:33 +0000 UTC); conditions: Initialized=True, Ready=False, ContainersReady=False (ContainersNotReady: [httpd]), PodScheduled=True; container httpd Waiting: ContainerCreating.
Mar 22 23:42:35.488: INFO: Pod "webserver-deployment-c7997dcc8-jq2xg" is not available: Pending on latest-worker, HostIP 172.17.0.13 (UID 631f6768-fc0a-441c-a460-420bd55e1fb1, resourceVersion 2004749, created 2020-03-22 23:42:33 +0000 UTC); conditions: Initialized=True, Ready=False, ContainersReady=False (ContainersNotReady: [httpd]), PodScheduled=True; container httpd Waiting: ContainerCreating.
Mar 22 23:42:35.488: INFO: Pod "webserver-deployment-c7997dcc8-lvndr" is not available: Pending on latest-worker2 (UID 35f7f65c-5914-462b-96b7-a921217ab397, resourceVersion 2004822, created 2020-03-22 23:42:35 +0000 UTC); conditions: PodScheduled=True only.
Mar 22 23:42:35.488: INFO: Pod "webserver-deployment-c7997dcc8-mjwlw" is not available: Pending on latest-worker2 (UID 9456113d-96c4-4e13-94cf-eea619fb9743, resourceVersion 2004819, created 2020-03-22 23:42:35 +0000 UTC); conditions: PodScheduled=True only.
Mar 22 23:42:35.488: INFO: Pod "webserver-deployment-c7997dcc8-npcv7" is not available: Pending on latest-worker2, HostIP 172.17.0.12 (UID 2b305f2e-3856-42bf-b849-b1360236dfd4, resourceVersion 2004855, created 2020-03-22 23:42:35 +0000 UTC); conditions: Initialized=True, Ready=False, ContainersReady=False (ContainersNotReady: [httpd]), PodScheduled=True; container httpd Waiting: ContainerCreating.
Mar 22 23:42:35.488: INFO: Pod "webserver-deployment-c7997dcc8-nqg4k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nqg4k webserver-deployment-c7997dcc8- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-c7997dcc8-nqg4k 3c448820-fb35-4710-8c8b-98b54ca29718 2004802 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d674b860-3656-4829-9260-bec438f8ed12 0xc002f76957 0xc002f76958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.489: INFO: Pod "webserver-deployment-c7997dcc8-pdcps" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pdcps webserver-deployment-c7997dcc8- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-c7997dcc8-pdcps d5cc2b4d-aa06-4a94-a76d-e0f4851040a2 2004803 0 2020-03-22 23:42:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d674b860-3656-4829-9260-bec438f8ed12 0xc002f76a87 0xc002f76a88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.489: INFO: Pod "webserver-deployment-c7997dcc8-t742k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t742k webserver-deployment-c7997dcc8- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-c7997dcc8-t742k 23b5e72a-0d53-4314-8290-c26601ffe19b 2004760 0 2020-03-22 23:42:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d674b860-3656-4829-9260-bec438f8ed12 0xc002f76bb7 0xc002f76bb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-22 23:42:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:42:35.489: INFO: Pod "webserver-deployment-c7997dcc8-zbkxc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zbkxc webserver-deployment-c7997dcc8- deployment-3193 /api/v1/namespaces/deployment-3193/pods/webserver-deployment-c7997dcc8-zbkxc bd4e9392-29fa-4195-ac9e-13244216c458 2004765 0 2020-03-22 23:42:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d674b860-3656-4829-9260-bec438f8ed12 0xc002f76d37 0xc002f76d38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v622p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v622p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v622p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-22 23:42:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-22 23:42:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:42:35.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3193" for this suite. • [SLOW TEST:12.788 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":50,"skipped":763,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:42:35.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of 
same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 22 23:42:35.915: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 22 23:42:48.420: INFO: >>> kubeConfig: /root/.kube/config Mar 22 23:42:51.534: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:43:02.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3027" for this suite. • [SLOW TEST:26.391 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":51,"skipped":768,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:43:02.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-b589248b-2eb4-472c-bca7-a8ca805b3079 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b589248b-2eb4-472c-bca7-a8ca805b3079 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:44:20.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4980" for this suite. • [SLOW TEST:78.523 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":782,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:44:20.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-aef6bf73-6e8e-4b7c-9c82-caec11e272be STEP: Creating a pod to test consume configMaps Mar 22 23:44:20.641: INFO: Waiting up to 5m0s for pod "pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7" in namespace "configmap-4442" to be "Succeeded or Failed" Mar 22 23:44:20.645: INFO: Pod "pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048582ms Mar 22 23:44:22.650: INFO: Pod "pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008285131s Mar 22 23:44:24.810: INFO: Pod "pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168599201s STEP: Saw pod success Mar 22 23:44:24.810: INFO: Pod "pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7" satisfied condition "Succeeded or Failed" Mar 22 23:44:24.812: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7 container configmap-volume-test: STEP: delete the pod Mar 22 23:44:25.153: INFO: Waiting for pod pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7 to disappear Mar 22 23:44:25.158: INFO: Pod pod-configmaps-95866529-ecc8-44d5-9351-0a22b8751da7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:44:25.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4442" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":803,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:44:25.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 22 23:44:25.291: INFO: Waiting up to 5m0s for pod "pod-64ff411c-f262-4612-8f88-21b9d8f94ac0" in namespace "emptydir-5582" to be "Succeeded or Failed" Mar 22 23:44:25.302: INFO: Pod "pod-64ff411c-f262-4612-8f88-21b9d8f94ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.138393ms Mar 22 23:44:27.355: INFO: Pod "pod-64ff411c-f262-4612-8f88-21b9d8f94ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064450462s Mar 22 23:44:29.359: INFO: Pod "pod-64ff411c-f262-4612-8f88-21b9d8f94ac0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.068363805s STEP: Saw pod success Mar 22 23:44:29.359: INFO: Pod "pod-64ff411c-f262-4612-8f88-21b9d8f94ac0" satisfied condition "Succeeded or Failed" Mar 22 23:44:29.362: INFO: Trying to get logs from node latest-worker pod pod-64ff411c-f262-4612-8f88-21b9d8f94ac0 container test-container: STEP: delete the pod Mar 22 23:44:29.395: INFO: Waiting for pod pod-64ff411c-f262-4612-8f88-21b9d8f94ac0 to disappear Mar 22 23:44:29.413: INFO: Pod pod-64ff411c-f262-4612-8f88-21b9d8f94ac0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:44:29.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5582" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":806,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:44:29.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-bb653fcc-68ac-439f-b2e5-4fa9f8b69601 STEP: Creating a pod to test consume configMaps Mar 22 
23:44:29.520: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d" in namespace "projected-8055" to be "Succeeded or Failed" Mar 22 23:44:29.524: INFO: Pod "pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.990016ms Mar 22 23:44:31.540: INFO: Pod "pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019675237s Mar 22 23:44:33.546: INFO: Pod "pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025747363s STEP: Saw pod success Mar 22 23:44:33.546: INFO: Pod "pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d" satisfied condition "Succeeded or Failed" Mar 22 23:44:33.548: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d container projected-configmap-volume-test: STEP: delete the pod Mar 22 23:44:33.587: INFO: Waiting for pod pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d to disappear Mar 22 23:44:33.598: INFO: Pod pod-projected-configmaps-ec796bf1-d064-4d1c-a23c-dff605de5c1d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:44:33.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8055" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:44:33.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7587 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7587 STEP: creating replication controller externalsvc in namespace services-7587 I0322 23:44:33.813906 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7587, replica count: 2 I0322 23:44:36.864342 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 23:44:39.864632 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort 
service to type=ExternalName Mar 22 23:44:39.921: INFO: Creating new exec pod Mar 22 23:44:43.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7587 execpod4ddzs -- /bin/sh -x -c nslookup nodeport-service' Mar 22 23:44:44.172: INFO: stderr: "I0322 23:44:44.067916 507 log.go:172] (0xc00003a4d0) (0xc00052a0a0) Create stream\nI0322 23:44:44.067971 507 log.go:172] (0xc00003a4d0) (0xc00052a0a0) Stream added, broadcasting: 1\nI0322 23:44:44.070841 507 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0322 23:44:44.070877 507 log.go:172] (0xc00003a4d0) (0xc0007ec000) Create stream\nI0322 23:44:44.070887 507 log.go:172] (0xc00003a4d0) (0xc0007ec000) Stream added, broadcasting: 3\nI0322 23:44:44.071913 507 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0322 23:44:44.071975 507 log.go:172] (0xc00003a4d0) (0xc0007fa000) Create stream\nI0322 23:44:44.071993 507 log.go:172] (0xc00003a4d0) (0xc0007fa000) Stream added, broadcasting: 5\nI0322 23:44:44.073024 507 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0322 23:44:44.156889 507 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0322 23:44:44.156931 507 log.go:172] (0xc0007fa000) (5) Data frame handling\nI0322 23:44:44.156957 507 log.go:172] (0xc0007fa000) (5) Data frame sent\n+ nslookup nodeport-service\nI0322 23:44:44.165066 507 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0322 23:44:44.165094 507 log.go:172] (0xc0007ec000) (3) Data frame handling\nI0322 23:44:44.165299 507 log.go:172] (0xc0007ec000) (3) Data frame sent\nI0322 23:44:44.166673 507 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0322 23:44:44.166692 507 log.go:172] (0xc0007ec000) (3) Data frame handling\nI0322 23:44:44.166704 507 log.go:172] (0xc0007ec000) (3) Data frame sent\nI0322 23:44:44.167156 507 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0322 23:44:44.167191 507 log.go:172] (0xc0007ec000) (3) Data frame 
handling\nI0322 23:44:44.167740 507 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0322 23:44:44.167751 507 log.go:172] (0xc0007fa000) (5) Data frame handling\nI0322 23:44:44.168967 507 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0322 23:44:44.168984 507 log.go:172] (0xc00052a0a0) (1) Data frame handling\nI0322 23:44:44.168996 507 log.go:172] (0xc00052a0a0) (1) Data frame sent\nI0322 23:44:44.169010 507 log.go:172] (0xc00003a4d0) (0xc00052a0a0) Stream removed, broadcasting: 1\nI0322 23:44:44.169429 507 log.go:172] (0xc00003a4d0) (0xc00052a0a0) Stream removed, broadcasting: 1\nI0322 23:44:44.169449 507 log.go:172] (0xc00003a4d0) (0xc0007ec000) Stream removed, broadcasting: 3\nI0322 23:44:44.169457 507 log.go:172] (0xc00003a4d0) (0xc0007fa000) Stream removed, broadcasting: 5\n" Mar 22 23:44:44.172: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7587.svc.cluster.local\tcanonical name = externalsvc.services-7587.svc.cluster.local.\nName:\texternalsvc.services-7587.svc.cluster.local\nAddress: 10.96.219.33\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7587, will wait for the garbage collector to delete the pods Mar 22 23:44:44.232: INFO: Deleting ReplicationController externalsvc took: 6.212866ms Mar 22 23:44:44.532: INFO: Terminating ReplicationController externalsvc pods took: 300.252147ms Mar 22 23:44:53.060: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:44:53.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7587" for this suite. 
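The type change exercised in this test can be sketched as a manifest. This is a hypothetical hand-written equivalent of the Service after conversion, not the object the e2e framework actually creates programmatically; the `externalName` value matches the CNAME visible in the nslookup output above:

```yaml
# Sketch of the converted Service (illustrative, not from the test source).
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-7587
spec:
  type: ExternalName
  # Cluster DNS now answers queries for nodeport-service with a CNAME
  # to this name, as seen in the nslookup stdout above.
  externalName: externalsvc.services-7587.svc.cluster.local
```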
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:19.515 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":56,"skipped":843,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:44:53.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-319cf9f4-70b1-48ad-a331-d45864f96389 in namespace container-probe-3475 Mar 22 23:44:57.195: INFO: Started pod test-webserver-319cf9f4-70b1-48ad-a331-d45864f96389 in namespace container-probe-3475 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 23:44:57.199: INFO: Initial restart count of pod 
test-webserver-319cf9f4-70b1-48ad-a331-d45864f96389 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:48:57.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3475" for this suite. • [SLOW TEST:244.916 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":846,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:48:58.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 22 23:48:58.271: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 22 23:48:58.282: INFO: Waiting for terminating namespaces to be deleted... 
Mar 22 23:48:58.285: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 22 23:48:58.302: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 22 23:48:58.302: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 23:48:58.302: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 22 23:48:58.302: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 23:48:58.302: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 22 23:48:58.320: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 22 23:48:58.320: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 23:48:58.320: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 22 23:48:58.320: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-832a83d4-0fb7-44d6-8d89-95c01fd87962 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-832a83d4-0fb7-44d6-8d89-95c01fd87962 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-832a83d4-0fb7-44d6-8d89-95c01fd87962 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:54:06.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8818" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.513 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":58,"skipped":879,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:54:06.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-23a3aa7b-c9c0-46fd-91ea-b6b0a277ce7f in namespace container-probe-4311 Mar 22 23:54:10.613: INFO: Started pod busybox-23a3aa7b-c9c0-46fd-91ea-b6b0a277ce7f in namespace container-probe-4311 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 23:54:10.616: INFO: Initial restart count of pod busybox-23a3aa7b-c9c0-46fd-91ea-b6b0a277ce7f is 0 Mar 22 23:55:00.734: INFO: Restart count of pod container-probe-4311/busybox-23a3aa7b-c9c0-46fd-91ea-b6b0a277ce7f is now 1 (50.11833999s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:55:00.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4311" for this suite. 
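The restart observed above (count going to 1 after ~50s) is driven by an exec liveness probe. A minimal sketch of the kind of pod spec this test creates follows; the image, shell script, and probe timings are assumptions for illustration, not values taken from the log:

```yaml
# Illustrative pod: the container deletes the probed file, so the
# "cat /tmp/health" liveness probe begins failing and the kubelet
# restarts the container -- the behaviour this test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness   # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox          # assumed image
    args:
    - /bin/sh
    - -c
    - "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5   # assumed timing
      periodSeconds: 5         # assumed timing
```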
• [SLOW TEST:54.215 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":887,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:55:00.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 22 23:55:00.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f" in namespace "projected-3724" to be "Succeeded or Failed" Mar 22 23:55:00.855: INFO: Pod "downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.513034ms Mar 22 23:55:02.860: INFO: Pod "downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022987331s Mar 22 23:55:05.258: INFO: Pod "downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.420808209s STEP: Saw pod success Mar 22 23:55:05.258: INFO: Pod "downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f" satisfied condition "Succeeded or Failed" Mar 22 23:55:05.262: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f container client-container: STEP: delete the pod Mar 22 23:55:05.468: INFO: Waiting for pod downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f to disappear Mar 22 23:55:05.485: INFO: Pod downwardapi-volume-edcfb8c0-4b6e-48e8-ac3f-c497be37132f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:55:05.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3724" for this suite. 
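The behaviour verified here (node allocatable CPU reported when the container sets no CPU limit) comes from a projected downward API volume with a `resourceFieldRef`. The field structure below follows the downward API; the pod name, image, and mount path are illustrative assumptions:

```yaml
# Sketch: because the container declares no resources.limits.cpu,
# limits.cpu below falls back to the node's allocatable CPU,
# which is what the test reads from the volume and asserts on.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```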
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:55:05.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1338 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 23:55:05.578: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 22 23:55:05.622: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 23:55:07.748: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 23:55:09.626: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 23:55:11.651: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 23:55:13.626: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 23:55:15.633: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 23:55:17.625: INFO: The status of Pod netserver-0 is Running (Ready = 
false) Mar 22 23:55:19.629: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 23:55:21.646: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 23:55:23.627: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 23:55:25.627: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 23:55:27.626: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 22 23:55:27.633: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 22 23:55:31.656: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.181:8080/dial?request=hostname&protocol=http&host=10.244.2.180&port=8080&tries=1'] Namespace:pod-network-test-1338 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:55:31.656: INFO: >>> kubeConfig: /root/.kube/config I0322 23:55:31.694932 7 log.go:172] (0xc002467550) (0xc0002ed220) Create stream I0322 23:55:31.694983 7 log.go:172] (0xc002467550) (0xc0002ed220) Stream added, broadcasting: 1 I0322 23:55:31.698253 7 log.go:172] (0xc002467550) Reply frame received for 1 I0322 23:55:31.698291 7 log.go:172] (0xc002467550) (0xc0002ed4a0) Create stream I0322 23:55:31.698305 7 log.go:172] (0xc002467550) (0xc0002ed4a0) Stream added, broadcasting: 3 I0322 23:55:31.699265 7 log.go:172] (0xc002467550) Reply frame received for 3 I0322 23:55:31.699299 7 log.go:172] (0xc002467550) (0xc000bd2500) Create stream I0322 23:55:31.699314 7 log.go:172] (0xc002467550) (0xc000bd2500) Stream added, broadcasting: 5 I0322 23:55:31.700250 7 log.go:172] (0xc002467550) Reply frame received for 5 I0322 23:55:31.792157 7 log.go:172] (0xc002467550) Data frame received for 3 I0322 23:55:31.792187 7 log.go:172] (0xc0002ed4a0) (3) Data frame handling I0322 23:55:31.792207 7 log.go:172] (0xc0002ed4a0) (3) Data frame sent I0322 23:55:31.792865 7 log.go:172] (0xc002467550) Data frame received for 5 I0322 
23:55:31.792906 7 log.go:172] (0xc000bd2500) (5) Data frame handling I0322 23:55:31.792950 7 log.go:172] (0xc002467550) Data frame received for 3 I0322 23:55:31.792976 7 log.go:172] (0xc0002ed4a0) (3) Data frame handling I0322 23:55:31.794979 7 log.go:172] (0xc002467550) Data frame received for 1 I0322 23:55:31.795016 7 log.go:172] (0xc0002ed220) (1) Data frame handling I0322 23:55:31.795040 7 log.go:172] (0xc0002ed220) (1) Data frame sent I0322 23:55:31.795111 7 log.go:172] (0xc002467550) (0xc0002ed220) Stream removed, broadcasting: 1 I0322 23:55:31.795154 7 log.go:172] (0xc002467550) Go away received I0322 23:55:31.795276 7 log.go:172] (0xc002467550) (0xc0002ed220) Stream removed, broadcasting: 1 I0322 23:55:31.795318 7 log.go:172] (0xc002467550) (0xc0002ed4a0) Stream removed, broadcasting: 3 I0322 23:55:31.795342 7 log.go:172] (0xc002467550) (0xc000bd2500) Stream removed, broadcasting: 5 Mar 22 23:55:31.795: INFO: Waiting for responses: map[] Mar 22 23:55:31.799: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.181:8080/dial?request=hostname&protocol=http&host=10.244.1.43&port=8080&tries=1'] Namespace:pod-network-test-1338 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:55:31.799: INFO: >>> kubeConfig: /root/.kube/config I0322 23:55:31.833053 7 log.go:172] (0xc002694000) (0xc0011f66e0) Create stream I0322 23:55:31.833081 7 log.go:172] (0xc002694000) (0xc0011f66e0) Stream added, broadcasting: 1 I0322 23:55:31.836129 7 log.go:172] (0xc002694000) Reply frame received for 1 I0322 23:55:31.836191 7 log.go:172] (0xc002694000) (0xc0011f6780) Create stream I0322 23:55:31.836225 7 log.go:172] (0xc002694000) (0xc0011f6780) Stream added, broadcasting: 3 I0322 23:55:31.837673 7 log.go:172] (0xc002694000) Reply frame received for 3 I0322 23:55:31.837711 7 log.go:172] (0xc002694000) (0xc0002770e0) Create stream I0322 23:55:31.837739 7 log.go:172] (0xc002694000) 
(0xc0002770e0) Stream added, broadcasting: 5 I0322 23:55:31.838664 7 log.go:172] (0xc002694000) Reply frame received for 5 I0322 23:55:31.904881 7 log.go:172] (0xc002694000) Data frame received for 3 I0322 23:55:31.904904 7 log.go:172] (0xc0011f6780) (3) Data frame handling I0322 23:55:31.904918 7 log.go:172] (0xc0011f6780) (3) Data frame sent I0322 23:55:31.905767 7 log.go:172] (0xc002694000) Data frame received for 5 I0322 23:55:31.905818 7 log.go:172] (0xc0002770e0) (5) Data frame handling I0322 23:55:31.905859 7 log.go:172] (0xc002694000) Data frame received for 3 I0322 23:55:31.905889 7 log.go:172] (0xc0011f6780) (3) Data frame handling I0322 23:55:31.907568 7 log.go:172] (0xc002694000) Data frame received for 1 I0322 23:55:31.907676 7 log.go:172] (0xc0011f66e0) (1) Data frame handling I0322 23:55:31.907749 7 log.go:172] (0xc0011f66e0) (1) Data frame sent I0322 23:55:31.907783 7 log.go:172] (0xc002694000) (0xc0011f66e0) Stream removed, broadcasting: 1 I0322 23:55:31.907813 7 log.go:172] (0xc002694000) Go away received I0322 23:55:31.907937 7 log.go:172] (0xc002694000) (0xc0011f66e0) Stream removed, broadcasting: 1 I0322 23:55:31.907957 7 log.go:172] (0xc002694000) (0xc0011f6780) Stream removed, broadcasting: 3 I0322 23:55:31.907968 7 log.go:172] (0xc002694000) (0xc0002770e0) Stream removed, broadcasting: 5 Mar 22 23:55:31.908: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:55:31.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1338" for this suite. 
• [SLOW TEST:26.435 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":940,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:55:31.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 23:55:32.587: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 23:55:34.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 23:55:36.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518132, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 23:55:39.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:55:39.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5184" for this suite. STEP: Destroying namespace "webhook-5184-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.846 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":62,"skipped":970,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:55:39.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:55:52.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4913" for this suite. • [SLOW TEST:13.174 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":63,"skipped":982,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:55:52.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 22 23:56:01.083: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 23:56:01.090: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 23:56:03.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 23:56:03.095: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 23:56:05.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 23:56:05.095: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 23:56:07.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 23:56:07.095: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 23:56:09.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 23:56:09.095: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 23:56:11.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 23:56:11.095: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 23:56:13.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 23:56:13.095: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:13.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5982" for this suite. 
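The postStart hook checked above fires an HTTP GET against the handler pod created in the test's BeforeEach step. A minimal sketch of the lifecycle stanza follows; the image, path, port, and host are placeholders, since the real test wires these to its handler pod at runtime:

```yaml
# Illustrative lifecycle hook: kubelet issues this GET right after the
# container starts, and the test verifies the handler received it.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: busybox               # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # placeholder path
          port: 8080                  # placeholder port
          host: 10.244.0.10           # placeholder handler-pod IP
```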
• [SLOW TEST:20.150 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":985,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:13.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io 
discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:13.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5151" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":65,"skipped":985,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:13.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 22 23:56:13.259: INFO: Created pod &Pod{ObjectMeta:{dns-4258 dns-4258 /api/v1/namespaces/dns-4258/pods/dns-4258 a5e27132-afd2-4d01-ab92-6805ecee58bd 2007976 0 2020-03-22 23:56:13 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnktg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnktg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnktg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecret
s:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 23:56:13.264: INFO: The status of Pod dns-4258 is Pending, waiting for it to be Running (with Ready = true) Mar 22 23:56:15.267: INFO: The status of Pod dns-4258 is Pending, waiting for it to be Running (with Ready = true) Mar 22 23:56:17.269: INFO: The status of Pod dns-4258 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 22 23:56:17.269: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4258 PodName:dns-4258 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:56:17.269: INFO: >>> kubeConfig: /root/.kube/config I0322 23:56:17.302229 7 log.go:172] (0xc002694b00) (0xc001994e60) Create stream I0322 23:56:17.302261 7 log.go:172] (0xc002694b00) (0xc001994e60) Stream added, broadcasting: 1 I0322 23:56:17.308635 7 log.go:172] (0xc002694b00) Reply frame received for 1 I0322 23:56:17.308714 7 log.go:172] (0xc002694b00) (0xc001994fa0) Create stream I0322 23:56:17.308761 7 log.go:172] (0xc002694b00) (0xc001994fa0) Stream added, broadcasting: 3 I0322 23:56:17.310551 7 log.go:172] (0xc002694b00) Reply frame received for 3 I0322 23:56:17.310571 7 log.go:172] (0xc002694b00) (0xc001995040) Create stream I0322 23:56:17.310580 7 log.go:172] (0xc002694b00) (0xc001995040) Stream added, broadcasting: 5 I0322 23:56:17.311414 7 log.go:172] (0xc002694b00) Reply frame received for 5 I0322 23:56:17.418019 7 log.go:172] (0xc002694b00) Data frame received for 3 I0322 23:56:17.418044 7 log.go:172] (0xc001994fa0) (3) Data frame handling I0322 23:56:17.418060 7 log.go:172] (0xc001994fa0) (3) Data frame sent I0322 23:56:17.418659 7 log.go:172] (0xc002694b00) Data frame received for 3 I0322 23:56:17.418693 7 log.go:172] (0xc001994fa0) (3) Data frame handling I0322 23:56:17.418722 7 log.go:172] (0xc002694b00) Data frame received for 5 I0322 23:56:17.418738 7 log.go:172] (0xc001995040) (5) Data frame handling I0322 23:56:17.420325 7 log.go:172] (0xc002694b00) Data frame received for 1 I0322 23:56:17.420343 7 log.go:172] (0xc001994e60) (1) Data frame handling I0322 23:56:17.420354 7 log.go:172] (0xc001994e60) (1) Data frame sent I0322 23:56:17.420375 7 log.go:172] (0xc002694b00) (0xc001994e60) Stream removed, broadcasting: 1 I0322 23:56:17.420440 7 log.go:172] (0xc002694b00) (0xc001994e60) Stream removed, broadcasting: 1 I0322 23:56:17.420455 7 
log.go:172] (0xc002694b00) (0xc001994fa0) Stream removed, broadcasting: 3 I0322 23:56:17.420581 7 log.go:172] (0xc002694b00) (0xc001995040) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Mar 22 23:56:17.420: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4258 PodName:dns-4258 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 23:56:17.420: INFO: >>> kubeConfig: /root/.kube/config I0322 23:56:17.422516 7 log.go:172] (0xc002694b00) Go away received I0322 23:56:17.447550 7 log.go:172] (0xc002d3e370) (0xc001227e00) Create stream I0322 23:56:17.447580 7 log.go:172] (0xc002d3e370) (0xc001227e00) Stream added, broadcasting: 1 I0322 23:56:17.449677 7 log.go:172] (0xc002d3e370) Reply frame received for 1 I0322 23:56:17.449711 7 log.go:172] (0xc002d3e370) (0xc00117a140) Create stream I0322 23:56:17.449727 7 log.go:172] (0xc002d3e370) (0xc00117a140) Stream added, broadcasting: 3 I0322 23:56:17.450530 7 log.go:172] (0xc002d3e370) Reply frame received for 3 I0322 23:56:17.450590 7 log.go:172] (0xc002d3e370) (0xc001ad6280) Create stream I0322 23:56:17.450615 7 log.go:172] (0xc002d3e370) (0xc001ad6280) Stream added, broadcasting: 5 I0322 23:56:17.451391 7 log.go:172] (0xc002d3e370) Reply frame received for 5 I0322 23:56:17.512855 7 log.go:172] (0xc002d3e370) Data frame received for 3 I0322 23:56:17.512886 7 log.go:172] (0xc00117a140) (3) Data frame handling I0322 23:56:17.512904 7 log.go:172] (0xc00117a140) (3) Data frame sent I0322 23:56:17.513929 7 log.go:172] (0xc002d3e370) Data frame received for 5 I0322 23:56:17.513947 7 log.go:172] (0xc001ad6280) (5) Data frame handling I0322 23:56:17.513984 7 log.go:172] (0xc002d3e370) Data frame received for 3 I0322 23:56:17.514014 7 log.go:172] (0xc00117a140) (3) Data frame handling I0322 23:56:17.515769 7 log.go:172] (0xc002d3e370) Data frame received for 1 I0322 23:56:17.515807 7 log.go:172] (0xc001227e00) (1) Data 
frame handling I0322 23:56:17.515829 7 log.go:172] (0xc001227e00) (1) Data frame sent I0322 23:56:17.515855 7 log.go:172] (0xc002d3e370) (0xc001227e00) Stream removed, broadcasting: 1 I0322 23:56:17.515900 7 log.go:172] (0xc002d3e370) Go away received I0322 23:56:17.515993 7 log.go:172] (0xc002d3e370) (0xc001227e00) Stream removed, broadcasting: 1 I0322 23:56:17.516028 7 log.go:172] (0xc002d3e370) (0xc00117a140) Stream removed, broadcasting: 3 I0322 23:56:17.516048 7 log.go:172] (0xc002d3e370) (0xc001ad6280) Stream removed, broadcasting: 5 Mar 22 23:56:17.516: INFO: Deleting pod dns-4258... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:17.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4258" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":66,"skipped":1018,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:17.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-cfd6e2c2-d0b9-4859-a907-091a315494c0 STEP: Creating a pod to test consume 
configMaps Mar 22 23:56:17.869: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980" in namespace "projected-8410" to be "Succeeded or Failed" Mar 22 23:56:17.906: INFO: Pod "pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980": Phase="Pending", Reason="", readiness=false. Elapsed: 36.089907ms Mar 22 23:56:19.910: INFO: Pod "pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040266949s Mar 22 23:56:21.914: INFO: Pod "pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044460884s STEP: Saw pod success Mar 22 23:56:21.914: INFO: Pod "pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980" satisfied condition "Succeeded or Failed" Mar 22 23:56:21.917: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980 container projected-configmap-volume-test: STEP: delete the pod Mar 22 23:56:21.938: INFO: Waiting for pod pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980 to disappear Mar 22 23:56:21.942: INFO: Pod pod-projected-configmaps-7b6ff673-8e2c-4965-88bb-30ca5f13f980 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:21.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8410" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1036,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:21.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-cb838367-7f70-4d3f-9bb1-2920b5247be9 STEP: Creating a pod to test consume secrets Mar 22 23:56:22.088: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b" in namespace "projected-2809" to be "Succeeded or Failed" Mar 22 23:56:22.108: INFO: Pod "pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.199245ms Mar 22 23:56:24.113: INFO: Pod "pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025079641s Mar 22 23:56:26.117: INFO: Pod "pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029688892s STEP: Saw pod success Mar 22 23:56:26.118: INFO: Pod "pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b" satisfied condition "Succeeded or Failed" Mar 22 23:56:26.121: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b container projected-secret-volume-test: STEP: delete the pod Mar 22 23:56:26.157: INFO: Waiting for pod pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b to disappear Mar 22 23:56:26.185: INFO: Pod pod-projected-secrets-53c9b6ac-dfcd-4ecb-a18d-f7d8ae80415b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:26.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2809" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1042,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:26.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-4366 STEP: 
Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4366 STEP: Deleting pre-stop pod Mar 22 23:56:39.286: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:39.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4366" for this suite. • [SLOW TEST:13.109 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":69,"skipped":1055,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:39.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be 
provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0322 23:56:49.399555 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 22 23:56:49.399: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:49.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1876" for this suite. 
• [SLOW TEST:10.105 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":70,"skipped":1060,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:49.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 22 23:56:49.496: INFO: Waiting up to 5m0s for pod "pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c" in namespace "emptydir-1812" to be "Succeeded or Failed" Mar 22 23:56:49.499: INFO: Pod "pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.784857ms Mar 22 23:56:51.504: INFO: Pod "pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007307303s Mar 22 23:56:53.508: INFO: Pod "pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011434871s STEP: Saw pod success Mar 22 23:56:53.508: INFO: Pod "pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c" satisfied condition "Succeeded or Failed" Mar 22 23:56:53.511: INFO: Trying to get logs from node latest-worker pod pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c container test-container: STEP: delete the pod Mar 22 23:56:53.531: INFO: Waiting for pod pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c to disappear Mar 22 23:56:53.535: INFO: Pod pod-8e787f47-67f5-4c18-b004-14d5d0b79b1c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:56:53.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1812" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1069,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:56:53.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-36544d7f-c2a4-4c05-95bc-877db3e9f2bc in 
namespace container-probe-8710 Mar 22 23:56:57.623: INFO: Started pod liveness-36544d7f-c2a4-4c05-95bc-877db3e9f2bc in namespace container-probe-8710 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 23:56:57.627: INFO: Initial restart count of pod liveness-36544d7f-c2a4-4c05-95bc-877db3e9f2bc is 0 Mar 22 23:57:15.666: INFO: Restart count of pod container-probe-8710/liveness-36544d7f-c2a4-4c05-95bc-877db3e9f2bc is now 1 (18.039004536s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:57:15.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8710" for this suite. • [SLOW TEST:22.150 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1073,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:57:15.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:57:15.771: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d9832969-1f9b-419d-9d2f-dbe35e14e9af" in namespace "security-context-test-9134" to be "Succeeded or Failed" Mar 22 23:57:15.787: INFO: Pod "busybox-user-65534-d9832969-1f9b-419d-9d2f-dbe35e14e9af": Phase="Pending", Reason="", readiness=false. Elapsed: 15.987444ms Mar 22 23:57:17.791: INFO: Pod "busybox-user-65534-d9832969-1f9b-419d-9d2f-dbe35e14e9af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020134569s Mar 22 23:57:19.796: INFO: Pod "busybox-user-65534-d9832969-1f9b-419d-9d2f-dbe35e14e9af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024391704s Mar 22 23:57:19.796: INFO: Pod "busybox-user-65534-d9832969-1f9b-419d-9d2f-dbe35e14e9af" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:57:19.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9134" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1077,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:57:19.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:57:19.873: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 22 23:57:21.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7519 create -f -' Mar 22 23:57:25.047: INFO: stderr: "" Mar 22 23:57:25.047: INFO: stdout: "e2e-test-crd-publish-openapi-3816-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 22 23:57:25.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7519 delete e2e-test-crd-publish-openapi-3816-crds test-cr' Mar 22 23:57:25.170: INFO: stderr: "" Mar 22 23:57:25.170: INFO: stdout: "e2e-test-crd-publish-openapi-3816-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 22 23:57:25.170: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7519 apply -f -' Mar 22 23:57:25.442: INFO: stderr: "" Mar 22 23:57:25.442: INFO: stdout: "e2e-test-crd-publish-openapi-3816-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 22 23:57:25.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7519 delete e2e-test-crd-publish-openapi-3816-crds test-cr' Mar 22 23:57:25.539: INFO: stderr: "" Mar 22 23:57:25.539: INFO: stdout: "e2e-test-crd-publish-openapi-3816-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 22 23:57:25.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3816-crds' Mar 22 23:57:25.780: INFO: stderr: "" Mar 22 23:57:25.780: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3816-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:57:28.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7519" for this suite. 
• [SLOW TEST:8.881 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":74,"skipped":1083,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:57:28.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:57:44.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2305" for this suite. • [SLOW TEST:16.287 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":75,"skipped":1116,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:57:44.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-49427238-5f6c-4a90-a95a-e1c5c5df02d5 STEP: Creating a pod to test consume secrets Mar 22 23:57:45.053: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71" in namespace "projected-3952" to be "Succeeded or Failed" Mar 22 23:57:45.081: INFO: Pod "pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71": Phase="Pending", Reason="", readiness=false. Elapsed: 27.048828ms Mar 22 23:57:47.084: INFO: Pod "pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030788555s Mar 22 23:57:49.108: INFO: Pod "pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05479476s STEP: Saw pod success Mar 22 23:57:49.108: INFO: Pod "pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71" satisfied condition "Succeeded or Failed" Mar 22 23:57:49.111: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71 container projected-secret-volume-test: STEP: delete the pod Mar 22 23:57:49.132: INFO: Waiting for pod pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71 to disappear Mar 22 23:57:49.135: INFO: Pod pod-projected-secrets-d6728e12-348a-4c6c-b353-471ad6b6ec71 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:57:49.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3952" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1121,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:57:49.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Mar 22 23:57:49.230: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9153" to be "Succeeded or Failed" Mar 22 23:57:49.249: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.372889ms Mar 22 23:57:51.253: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023023852s Mar 22 23:57:53.257: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026985945s Mar 22 23:57:55.283: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052796698s STEP: Saw pod success Mar 22 23:57:55.283: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 22 23:57:55.285: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 22 23:57:55.324: INFO: Waiting for pod pod-host-path-test to disappear Mar 22 23:57:55.349: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:57:55.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9153" for this suite. 
• [SLOW TEST:6.215 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1132,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:57:55.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:06.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8268" for this suite. 
• [SLOW TEST:11.175 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":78,"skipped":1143,"failed":0} SSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:06.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:06.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6448" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":79,"skipped":1148,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:06.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 23:58:07.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 23:58:09.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518287, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518287, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518287, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518287, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 23:58:12.610: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:58:12.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7494-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:13.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2768" for this suite. STEP: Destroying namespace "webhook-2768-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.287 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":80,"skipped":1151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:13.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 22 23:58:14.018: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 22 23:58:23.088: INFO: 
no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:23.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3052" for this suite. • [SLOW TEST:9.171 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1196,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:23.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-b7a6d9f9-21a8-496b-b9ba-11dc77a0d929 STEP: Creating a pod to test consume secrets Mar 22 23:58:23.181: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d" in namespace 
"projected-1276" to be "Succeeded or Failed" Mar 22 23:58:23.184: INFO: Pod "pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.874646ms Mar 22 23:58:25.211: INFO: Pod "pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030289717s Mar 22 23:58:27.215: INFO: Pod "pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034274809s STEP: Saw pod success Mar 22 23:58:27.215: INFO: Pod "pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d" satisfied condition "Succeeded or Failed" Mar 22 23:58:27.217: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d container projected-secret-volume-test: STEP: delete the pod Mar 22 23:58:27.246: INFO: Waiting for pod pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d to disappear Mar 22 23:58:27.256: INFO: Pod pod-projected-secrets-f04506ed-eabb-4ca9-8337-006d59542d5d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:27.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1276" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1196,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:27.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 22 23:58:27.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4" in namespace "downward-api-9425" to be "Succeeded or Failed" Mar 22 23:58:27.371: INFO: Pod "downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.37571ms Mar 22 23:58:29.391: INFO: Pod "downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0287145s Mar 22 23:58:31.395: INFO: Pod "downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033133226s STEP: Saw pod success Mar 22 23:58:31.395: INFO: Pod "downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4" satisfied condition "Succeeded or Failed" Mar 22 23:58:31.399: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4 container client-container: STEP: delete the pod Mar 22 23:58:31.437: INFO: Waiting for pod downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4 to disappear Mar 22 23:58:31.474: INFO: Pod downwardapi-volume-e8d62210-a0a0-471e-8c9d-475d7bcb0fa4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:31.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9425" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:31.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create 
role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 22 23:58:31.993: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 22 23:58:34.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518312, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518312, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518312, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518311, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 23:58:37.129: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:58:37.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 
23:58:38.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4359" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.928 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":84,"skipped":1229,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:38.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-efd9e11f-6dc2-43e9-9e4c-72f4f3f89458 STEP: Creating a pod to test consume configMaps Mar 22 
23:58:38.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc" in namespace "projected-4770" to be "Succeeded or Failed" Mar 22 23:58:38.527: INFO: Pod "pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.411418ms Mar 22 23:58:40.531: INFO: Pod "pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007663226s Mar 22 23:58:42.535: INFO: Pod "pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011765723s STEP: Saw pod success Mar 22 23:58:42.535: INFO: Pod "pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc" satisfied condition "Succeeded or Failed" Mar 22 23:58:42.539: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc container projected-configmap-volume-test: STEP: delete the pod Mar 22 23:58:42.558: INFO: Waiting for pod pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc to disappear Mar 22 23:58:42.574: INFO: Pod pod-projected-configmaps-4976d68e-2d58-4f88-ba52-920ba81953fc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:42.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4770" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1238,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:42.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 22 23:58:42.647: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 22 23:58:45.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2332 create -f -' Mar 22 23:58:48.577: INFO: stderr: "" Mar 22 23:58:48.577: INFO: stdout: "e2e-test-crd-publish-openapi-5374-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 22 23:58:48.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2332 delete e2e-test-crd-publish-openapi-5374-crds test-cr' Mar 22 23:58:48.680: INFO: stderr: "" Mar 22 23:58:48.680: INFO: stdout: 
"e2e-test-crd-publish-openapi-5374-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 22 23:58:48.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2332 apply -f -' Mar 22 23:58:48.924: INFO: stderr: "" Mar 22 23:58:48.924: INFO: stdout: "e2e-test-crd-publish-openapi-5374-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 22 23:58:48.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2332 delete e2e-test-crd-publish-openapi-5374-crds test-cr' Mar 22 23:58:49.021: INFO: stderr: "" Mar 22 23:58:49.021: INFO: stdout: "e2e-test-crd-publish-openapi-5374-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 22 23:58:49.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5374-crds' Mar 22 23:58:49.277: INFO: stderr: "" Mar 22 23:58:49.277: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5374-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:58:52.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2332" for this suite. • [SLOW TEST:9.614 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":86,"skipped":1247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:58:52.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-zklq STEP: Creating a pod to test atomic-volume-subpath Mar 22 23:58:52.366: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zklq" in namespace "subpath-1444" to be "Succeeded or Failed" Mar 22 23:58:52.370: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.919472ms Mar 22 23:58:54.373: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007227364s Mar 22 23:58:56.383: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 4.016619489s Mar 22 23:58:58.386: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 6.019923416s Mar 22 23:59:00.389: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 8.023271055s Mar 22 23:59:02.395: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 10.029260247s Mar 22 23:59:04.400: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 12.03366824s Mar 22 23:59:06.404: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 14.037586029s Mar 22 23:59:08.407: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 16.04116621s Mar 22 23:59:10.412: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 18.045405177s Mar 22 23:59:12.419: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.052601495s Mar 22 23:59:14.423: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Running", Reason="", readiness=true. Elapsed: 22.056752852s Mar 22 23:59:16.427: INFO: Pod "pod-subpath-test-downwardapi-zklq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061083215s STEP: Saw pod success Mar 22 23:59:16.427: INFO: Pod "pod-subpath-test-downwardapi-zklq" satisfied condition "Succeeded or Failed" Mar 22 23:59:16.430: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-zklq container test-container-subpath-downwardapi-zklq: STEP: delete the pod Mar 22 23:59:16.507: INFO: Waiting for pod pod-subpath-test-downwardapi-zklq to disappear Mar 22 23:59:16.521: INFO: Pod pod-subpath-test-downwardapi-zklq no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zklq Mar 22 23:59:16.521: INFO: Deleting pod "pod-subpath-test-downwardapi-zklq" in namespace "subpath-1444" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 22 23:59:16.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1444" for this suite. 
• [SLOW TEST:24.317 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":87,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 22 23:59:16.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-ff55a085-2ab1-4c00-8ec9-0626118087b6 STEP: Creating secret with name s-test-opt-upd-9fdf8c79-80d2-4fd5-9e22-85f98aa6fee2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ff55a085-2ab1-4c00-8ec9-0626118087b6 STEP: Updating secret s-test-opt-upd-9fdf8c79-80d2-4fd5-9e22-85f98aa6fee2 STEP: Creating secret with name s-test-opt-create-4a25eb2a-7261-4cf7-8e74-6bba23587c3d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:00:35.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5837" for this suite. • [SLOW TEST:78.612 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1300,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:00:35.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 23 00:00:35.208: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 
/api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009487 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:00:35.208: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009487 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 23 00:00:45.216: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009534 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:00:45.216: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009534 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 23 00:00:55.225: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009566 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:00:55.225: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009566 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 23 00:01:05.242: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009598 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:01:05.242: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-a 620d6a2f-37d4-4193-9f1d-30b585abc6ef 2009598 0 2020-03-23 00:00:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 23 00:01:15.250: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-b 2b6a0108-a313-48ea-80e8-ef0281146d33 2009628 0 2020-03-23 00:01:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:01:15.250: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-b 2b6a0108-a313-48ea-80e8-ef0281146d33 
2009628 0 2020-03-23 00:01:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 23 00:01:25.270: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-b 2b6a0108-a313-48ea-80e8-ef0281146d33 2009659 0 2020-03-23 00:01:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:01:25.271: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1679 /api/v1/namespaces/watch-1679/configmaps/e2e-watch-test-configmap-b 2b6a0108-a313-48ea-80e8-ef0281146d33 2009659 0 2020-03-23 00:01:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:01:35.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1679" for this suite. 
• [SLOW TEST:60.138 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":89,"skipped":1307,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:01:35.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-7e9fee40-82b5-4e3f-a06b-6b4fde8fcf83 STEP: Creating a pod to test consume secrets Mar 23 00:01:35.338: INFO: Waiting up to 5m0s for pod "pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089" in namespace "secrets-3569" to be "Succeeded or Failed" Mar 23 00:01:35.375: INFO: Pod "pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.233568ms Mar 23 00:01:37.379: INFO: Pod "pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040973243s Mar 23 00:01:39.384: INFO: Pod "pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04528584s STEP: Saw pod success Mar 23 00:01:39.384: INFO: Pod "pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089" satisfied condition "Succeeded or Failed" Mar 23 00:01:39.387: INFO: Trying to get logs from node latest-worker pod pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089 container secret-volume-test: STEP: delete the pod Mar 23 00:01:39.455: INFO: Waiting for pod pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089 to disappear Mar 23 00:01:39.462: INFO: Pod pod-secrets-00b4d2f9-7be2-4699-b398-ac6ea123c089 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:01:39.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3569" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1318,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:01:39.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 00:01:39.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681" in namespace "projected-6882" to be "Succeeded or Failed" Mar 23 00:01:39.528: INFO: Pod "downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681": Phase="Pending", Reason="", readiness=false. Elapsed: 3.247368ms Mar 23 00:01:41.532: INFO: Pod "downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007293179s Mar 23 00:01:43.536: INFO: Pod "downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011473901s STEP: Saw pod success Mar 23 00:01:43.536: INFO: Pod "downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681" satisfied condition "Succeeded or Failed" Mar 23 00:01:43.540: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681 container client-container: STEP: delete the pod Mar 23 00:01:43.577: INFO: Waiting for pod downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681 to disappear Mar 23 00:01:43.584: INFO: Pod downwardapi-volume-8bc0f85b-81fd-44ea-a9af-24ae90057681 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:01:43.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6882" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:01:43.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 
'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:02:11.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9599" for this suite. 
• [SLOW TEST:27.416 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1358,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:02:11.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 23 00:02:15.126: INFO: &Pod{ObjectMeta:{send-events-5ea03a8a-b98a-484d-880c-1b4d24fb1ad6 events-9523 /api/v1/namespaces/events-9523/pods/send-events-5ea03a8a-b98a-484d-880c-1b4d24fb1ad6 fa2b3dd6-2d02-4d8d-90c8-f1b9851928b9 2009921 0 
2020-03-23 00:02:11 +0000 UTC map[name:foo time:90296424] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8zdxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8zdxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8zdxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomai
n:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:02:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:02:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:02:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:02:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.60,StartTime:2020-03-23 00:02:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 00:02:13 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://f2e2086e41b2da3790bc2e588d0a1dad7b03649331f6c2ddabe5b6776b2e4fe1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 23 00:02:17.131: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 23 00:02:19.135: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:02:19.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9523" for this suite. 
• [SLOW TEST:8.170 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":93,"skipped":1373,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:02:19.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:02:19.263: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.668828ms) Mar 23 00:02:19.267: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.901422ms) Mar 23 00:02:19.270: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.296185ms) Mar 23 00:02:19.274: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.937857ms) Mar 23 00:02:19.277: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.39287ms) Mar 23 00:02:19.281: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.235368ms) Mar 23 00:02:19.284: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.306829ms) Mar 23 00:02:19.288: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.940119ms) Mar 23 00:02:19.292: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.546225ms) Mar 23 00:02:19.295: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.695235ms) Mar 23 00:02:19.299: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.96336ms) Mar 23 00:02:19.304: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 4.319214ms) Mar 23 00:02:19.307: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.47228ms) Mar 23 00:02:19.328: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 20.941336ms) Mar 23 00:02:19.332: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.727782ms) Mar 23 00:02:19.335: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.726117ms) Mar 23 00:02:19.338: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.953228ms) Mar 23 00:02:19.341: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.93538ms) Mar 23 00:02:19.344: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.860529ms) Mar 23 00:02:19.346: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/
(200; 2.331696ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:02:19.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5510" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":94,"skipped":1394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:02:19.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 23 00:02:19.474: INFO: Waiting up to 5m0s for pod "downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59" in namespace "downward-api-1626" to be "Succeeded or Failed" Mar 23 00:02:19.477: INFO: Pod "downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.606641ms Mar 23 00:02:21.480: INFO: Pod "downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005896559s Mar 23 00:02:23.484: INFO: Pod "downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009798777s STEP: Saw pod success Mar 23 00:02:23.484: INFO: Pod "downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59" satisfied condition "Succeeded or Failed" Mar 23 00:02:23.487: INFO: Trying to get logs from node latest-worker pod downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59 container dapi-container: STEP: delete the pod Mar 23 00:02:23.546: INFO: Waiting for pod downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59 to disappear Mar 23 00:02:23.553: INFO: Pod downward-api-3a2cbdc3-821a-40a8-b1cb-e449db9bfd59 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:02:23.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1626" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1418,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:02:23.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Mar 23 00:02:23.615: INFO: Waiting up to 5m0s for pod 
"pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7" in namespace "emptydir-8154" to be "Succeeded or Failed" Mar 23 00:02:23.619: INFO: Pod "pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722961ms Mar 23 00:02:25.623: INFO: Pod "pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007753764s Mar 23 00:02:27.627: INFO: Pod "pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012082799s STEP: Saw pod success Mar 23 00:02:27.627: INFO: Pod "pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7" satisfied condition "Succeeded or Failed" Mar 23 00:02:27.635: INFO: Trying to get logs from node latest-worker pod pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7 container test-container: STEP: delete the pod Mar 23 00:02:27.661: INFO: Waiting for pod pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7 to disappear Mar 23 00:02:27.673: INFO: Pod pod-5cd2b537-6d5d-491a-9a36-436b74dae8d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:02:27.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8154" for this suite. 
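The emptyDir test above spins up a pod roughly like the following and then waits for it to reach Succeeded before reading its logs. This is a hedged sketch, not the suite's exact fixture: the e2e suite uses its own test image and generated names, so the image, pod name, and command here are illustrative.

```yaml
# Hypothetical sketch of the kind of pod the emptyDir "default medium" mode
# test creates; names, image, and command are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check        # placeholder; the real pod name is UUID-suffixed
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative; the suite uses its own test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # print the mount's file mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # no medium set, i.e. the node's default (disk-backed)
```

The tmpfs variant that follows is the same shape with `emptyDir: {medium: Memory}`.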
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1425,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:02:27.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 23 00:02:27.759: INFO: Waiting up to 5m0s for pod "pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e" in namespace "emptydir-1038" to be "Succeeded or Failed" Mar 23 00:02:27.763: INFO: Pod "pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.71387ms Mar 23 00:02:29.769: INFO: Pod "pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010118081s Mar 23 00:02:31.774: INFO: Pod "pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014903853s STEP: Saw pod success Mar 23 00:02:31.774: INFO: Pod "pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e" satisfied condition "Succeeded or Failed" Mar 23 00:02:31.778: INFO: Trying to get logs from node latest-worker2 pod pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e container test-container: STEP: delete the pod Mar 23 00:02:31.812: INFO: Waiting for pod pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e to disappear Mar 23 00:02:31.817: INFO: Pod pod-4a936040-0a1e-42ce-a5b3-bc5dcbbce90e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:02:31.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1038" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1426,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:02:31.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b in namespace 
container-probe-2047 Mar 23 00:02:35.924: INFO: Started pod liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b in namespace container-probe-2047 STEP: checking the pod's current state and verifying that restartCount is present Mar 23 00:02:35.927: INFO: Initial restart count of pod liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b is 0 Mar 23 00:02:55.970: INFO: Restart count of pod container-probe-2047/liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b is now 1 (20.042859691s elapsed) Mar 23 00:03:16.011: INFO: Restart count of pod container-probe-2047/liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b is now 2 (40.083680316s elapsed) Mar 23 00:03:36.054: INFO: Restart count of pod container-probe-2047/liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b is now 3 (1m0.12645534s elapsed) Mar 23 00:03:56.098: INFO: Restart count of pod container-probe-2047/liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b is now 4 (1m20.171111802s elapsed) Mar 23 00:05:06.263: INFO: Restart count of pod container-probe-2047/liveness-94a50885-91e7-4afe-8c3c-a1297e00ea1b is now 5 (2m30.335916757s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:05:06.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2047" for this suite. 
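The restart counts climbing 0 → 5 above come from a liveness probe that is designed to start failing. A minimal sketch of that pattern, with an illustrative image and probe (not the suite's exact fixture):

```yaml
# Hedged sketch of a deliberately-failing liveness pod: healthy for a while,
# then the probe target disappears and the kubelet restarts the container,
# so restartCount increases monotonically.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example            # placeholder; the real test uses a UUID-suffixed name
spec:
  containers:
  - name: liveness
    image: busybox                  # illustrative
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # fails once /tmp/healthy is removed
      initialDelaySeconds: 5
      periodSeconds: 5
```

The widening gaps between restarts in the log (20s, 20s, ..., then 70s to reach count 5) reflect the kubelet's exponential restart backoff.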
• [SLOW TEST:154.462 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1427,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:05:06.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 00:05:06.831: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 00:05:09.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518706, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518706, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518707, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518706, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:05:11.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518706, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518706, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518707, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518706, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 00:05:14.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:05:14.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2510" for this suite. STEP: Destroying namespace "webhook-2510-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.539 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":99,"skipped":1434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:05:14.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-rst9 STEP: Creating a pod to test atomic-volume-subpath Mar 23 00:05:14.920: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rst9" in namespace "subpath-8574" to be 
"Succeeded or Failed" Mar 23 00:05:14.924: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.899822ms Mar 23 00:05:16.928: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007576337s Mar 23 00:05:18.932: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 4.011532365s Mar 23 00:05:20.936: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 6.01605599s Mar 23 00:05:22.941: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 8.020224487s Mar 23 00:05:24.945: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 10.024740079s Mar 23 00:05:26.949: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 12.02907594s Mar 23 00:05:28.954: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 14.033694851s Mar 23 00:05:30.958: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 16.037849863s Mar 23 00:05:32.962: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 18.042110586s Mar 23 00:05:34.966: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 20.045939205s Mar 23 00:05:36.970: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Running", Reason="", readiness=true. Elapsed: 22.049332229s Mar 23 00:05:38.975: INFO: Pod "pod-subpath-test-projected-rst9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055128226s STEP: Saw pod success Mar 23 00:05:38.975: INFO: Pod "pod-subpath-test-projected-rst9" satisfied condition "Succeeded or Failed" Mar 23 00:05:38.978: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-rst9 container test-container-subpath-projected-rst9: STEP: delete the pod Mar 23 00:05:39.025: INFO: Waiting for pod pod-subpath-test-projected-rst9 to disappear Mar 23 00:05:39.029: INFO: Pod pod-subpath-test-projected-rst9 no longer exists STEP: Deleting pod pod-subpath-test-projected-rst9 Mar 23 00:05:39.029: INFO: Deleting pod "pod-subpath-test-projected-rst9" in namespace "subpath-8574" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:05:39.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8574" for this suite. • [SLOW TEST:24.213 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":100,"skipped":1460,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:05:39.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 23 00:05:42.221: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:05:42.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-694" for this suite. 
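The termination-message test above ("Expected: &{OK} to match Container's Termination Message: OK") checks that a message written to the termination-log file is surfaced in container status even with `FallbackToLogsOnError` set, since that policy only falls back to logs when the container fails. A hedged sketch of that shape (image and names are illustrative):

```yaml
# Sketch: the container writes "OK" to the termination message file and exits 0;
# FallbackToLogsOnError does not kick in because the pod succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # illustrative
    command: ["sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # the default path, shown explicitly
    terminationMessagePolicy: FallbackToLogsOnError
```

The message then appears under `status.containerStatuses[].state.terminated.message`, which is what the test asserts against.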
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1471,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:05:42.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 00:05:43.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 00:05:45.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518743, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518743, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518743, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720518743, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 00:05:48.111: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:05:58.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4980" for this suite. STEP: Destroying namespace "webhook-4980-markers" for this suite. 
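The deny test above registers a validating webhook against pods and configmaps, then verifies that non-compliant creates, PUTs, and PATCHes are rejected while a whitelisted namespace bypasses it. A hedged sketch of such a registration; the webhook name, path, and rule details are illustrative, though the service name `e2e-test-webhook` and namespace `webhook-4980` do appear in this run:

```yaml
# Hypothetical sketch of a validating webhook registration like the one the
# test installs via the AdmissionRegistration API (v1 is available on this
# v1.17 cluster). caBundle is omitted here but required in practice.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-objects       # placeholder
webhooks:
- name: deny.example.com            # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail               # an unreachable/hanging webhook then blocks admission
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      name: e2e-test-webhook        # service name seen in the log
      namespace: webhook-4980       # namespace from this run
      path: /validate               # illustrative path
```

The bypass step in the log corresponds to scoping the webhook with a `namespaceSelector` so a specially labeled namespace is exempt.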
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.999 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":102,"skipped":1472,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:05:58.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5533.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5533.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5533.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5533.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5533.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5533.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 23 00:06:04.463: INFO: DNS probes using dns-5533/dns-test-c42d6c3b-3cc9-47df-88dd-d6cc13718307 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:06:04.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5533" for this suite. 
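The shell loops quoted above probe entries the kubelet writes into each pod's /etc/hosts, keyed off the pod's hostname and a headless-service subdomain. A minimal sketch of a pod that can make the same check, assuming an illustrative image (the suite's actual probe pods run the quoted loops in dnsutils-based containers):

```yaml
# Hedged sketch: with hostname + subdomain set, the kubelet adds a matching
# /etc/hosts entry, which a container can verify directly.
apiVersion: v1
kind: Pod
metadata:
  name: hosts-check                 # placeholder
spec:
  restartPolicy: Never
  hostname: dns-querier-1           # mirrors the name probed in the log
  subdomain: dns-test-service       # headless-service subdomain, as in the test
  containers:
  - name: check
    image: busybox                  # illustrative
    command: ["sh", "-c", "grep dns-querier-1 /etc/hosts"]
```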
• [SLOW TEST:6.211 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":103,"skipped":1479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:06:04.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 23 00:06:04.617: INFO: namespace kubectl-4266 Mar 23 00:06:04.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4266' Mar 23 00:06:05.016: INFO: stderr: "" Mar 23 00:06:05.016: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 23 00:06:06.021: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:06:06.021: INFO: Found 0 / 1 Mar 23 00:06:07.027: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:06:07.027: INFO: Found 0 / 1 Mar 23 00:06:08.021: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:06:08.021: INFO: Found 0 / 1 Mar 23 00:06:09.028: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:06:09.028: INFO: Found 1 / 1 Mar 23 00:06:09.028: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 23 00:06:09.031: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:06:09.031: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 23 00:06:09.031: INFO: wait on agnhost-master startup in kubectl-4266 Mar 23 00:06:09.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-4rfzt agnhost-master --namespace=kubectl-4266' Mar 23 00:06:09.154: INFO: stderr: "" Mar 23 00:06:09.154: INFO: stdout: "Paused\n" STEP: exposing RC Mar 23 00:06:09.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4266' Mar 23 00:06:09.284: INFO: stderr: "" Mar 23 00:06:09.284: INFO: stdout: "service/rm2 exposed\n" Mar 23 00:06:09.292: INFO: Service rm2 in namespace kubectl-4266 found. STEP: exposing service Mar 23 00:06:11.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4266' Mar 23 00:06:11.436: INFO: stderr: "" Mar 23 00:06:11.436: INFO: stdout: "service/rm3 exposed\n" Mar 23 00:06:11.447: INFO: Service rm3 in namespace kubectl-4266 found. 
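The expose sequence in the log reduces to two `kubectl expose` invocations: one creating a service from the replication controller, and one creating a second service from the first. A sketch against a live cluster, assuming the `agnhost-master` RC and `kubectl-4266` namespace from the log already exist:

```shell
# Expose the RC as service rm2, then expose rm2 as service rm3
# (ports and names taken from the log above; requires a running cluster).
kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 -n kubectl-4266
kubectl expose service rm2       --name=rm3 --port=2345 --target-port=6379 -n kubectl-4266

# Both services should now select the agnhost pods on port 6379.
kubectl get services rm2 rm3 -n kubectl-4266
```

Note that exposing a service (rather than an RC) copies the source service's selector, which is why `rm3` ends up routing to the same pods as `rm2`.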
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:06:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4266" for this suite. • [SLOW TEST:8.909 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":104,"skipped":1543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:06:13.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Mar 23 00:06:13.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Mar 23 00:06:13.698: INFO: stderr: "" Mar 
23 00:06:13.698: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:06:13.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7633" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":105,"skipped":1570,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:06:13.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:06:24.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3739" for this suite. • [SLOW TEST:11.165 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":106,"skipped":1582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:06:24.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1233 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 23 00:06:24.994: INFO: Found 0 stateful pods, waiting for 3 Mar 23 00:06:34.999: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:06:34.999: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:06:34.999: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 23 00:06:44.999: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:06:44.999: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 23 
00:06:44.999: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:06:45.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1233 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:06:45.265: INFO: stderr: "I0323 00:06:45.139538 870 log.go:172] (0xc000b8a000) (0xc000817400) Create stream\nI0323 00:06:45.139613 870 log.go:172] (0xc000b8a000) (0xc000817400) Stream added, broadcasting: 1\nI0323 00:06:45.144453 870 log.go:172] (0xc000b8a000) Reply frame received for 1\nI0323 00:06:45.144497 870 log.go:172] (0xc000b8a000) (0xc000ab6000) Create stream\nI0323 00:06:45.144511 870 log.go:172] (0xc000b8a000) (0xc000ab6000) Stream added, broadcasting: 3\nI0323 00:06:45.145846 870 log.go:172] (0xc000b8a000) Reply frame received for 3\nI0323 00:06:45.145874 870 log.go:172] (0xc000b8a000) (0xc000974000) Create stream\nI0323 00:06:45.145882 870 log.go:172] (0xc000b8a000) (0xc000974000) Stream added, broadcasting: 5\nI0323 00:06:45.146983 870 log.go:172] (0xc000b8a000) Reply frame received for 5\nI0323 00:06:45.227405 870 log.go:172] (0xc000b8a000) Data frame received for 5\nI0323 00:06:45.227431 870 log.go:172] (0xc000974000) (5) Data frame handling\nI0323 00:06:45.227446 870 log.go:172] (0xc000974000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:06:45.259376 870 log.go:172] (0xc000b8a000) Data frame received for 3\nI0323 00:06:45.259415 870 log.go:172] (0xc000ab6000) (3) Data frame handling\nI0323 00:06:45.259455 870 log.go:172] (0xc000ab6000) (3) Data frame sent\nI0323 00:06:45.259476 870 log.go:172] (0xc000b8a000) Data frame received for 3\nI0323 00:06:45.259497 870 log.go:172] (0xc000ab6000) (3) Data frame handling\nI0323 00:06:45.259694 870 log.go:172] (0xc000b8a000) Data frame received for 5\nI0323 00:06:45.259709 870 log.go:172] (0xc000974000) (5) Data frame 
handling\nI0323 00:06:45.261682 870 log.go:172] (0xc000b8a000) Data frame received for 1\nI0323 00:06:45.261699 870 log.go:172] (0xc000817400) (1) Data frame handling\nI0323 00:06:45.261713 870 log.go:172] (0xc000817400) (1) Data frame sent\nI0323 00:06:45.261724 870 log.go:172] (0xc000b8a000) (0xc000817400) Stream removed, broadcasting: 1\nI0323 00:06:45.262003 870 log.go:172] (0xc000b8a000) Go away received\nI0323 00:06:45.262053 870 log.go:172] (0xc000b8a000) (0xc000817400) Stream removed, broadcasting: 1\nI0323 00:06:45.262103 870 log.go:172] (0xc000b8a000) (0xc000ab6000) Stream removed, broadcasting: 3\nI0323 00:06:45.262132 870 log.go:172] (0xc000b8a000) (0xc000974000) Stream removed, broadcasting: 5\n" Mar 23 00:06:45.265: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:06:45.265: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 23 00:06:55.315: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 23 00:07:05.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1233 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:07:05.569: INFO: stderr: "I0323 00:07:05.494452 891 log.go:172] (0xc000768b00) (0xc0006cc320) Create stream\nI0323 00:07:05.494519 891 log.go:172] (0xc000768b00) (0xc0006cc320) Stream added, broadcasting: 1\nI0323 00:07:05.498128 891 log.go:172] (0xc000768b00) Reply frame received for 1\nI0323 00:07:05.498174 891 log.go:172] (0xc000768b00) (0xc0006cc3c0) Create stream\nI0323 00:07:05.498188 891 log.go:172] (0xc000768b00) (0xc0006cc3c0) Stream added, broadcasting: 3\nI0323 00:07:05.499261 891 
log.go:172] (0xc000768b00) Reply frame received for 3\nI0323 00:07:05.499304 891 log.go:172] (0xc000768b00) (0xc0008b9d60) Create stream\nI0323 00:07:05.499319 891 log.go:172] (0xc000768b00) (0xc0008b9d60) Stream added, broadcasting: 5\nI0323 00:07:05.500310 891 log.go:172] (0xc000768b00) Reply frame received for 5\nI0323 00:07:05.562377 891 log.go:172] (0xc000768b00) Data frame received for 3\nI0323 00:07:05.562396 891 log.go:172] (0xc0006cc3c0) (3) Data frame handling\nI0323 00:07:05.562409 891 log.go:172] (0xc0006cc3c0) (3) Data frame sent\nI0323 00:07:05.562897 891 log.go:172] (0xc000768b00) Data frame received for 5\nI0323 00:07:05.562938 891 log.go:172] (0xc000768b00) Data frame received for 3\nI0323 00:07:05.563003 891 log.go:172] (0xc0006cc3c0) (3) Data frame handling\nI0323 00:07:05.563038 891 log.go:172] (0xc0008b9d60) (5) Data frame handling\nI0323 00:07:05.563068 891 log.go:172] (0xc0008b9d60) (5) Data frame sent\nI0323 00:07:05.563089 891 log.go:172] (0xc000768b00) Data frame received for 5\nI0323 00:07:05.563108 891 log.go:172] (0xc0008b9d60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 00:07:05.564677 891 log.go:172] (0xc000768b00) Data frame received for 1\nI0323 00:07:05.564713 891 log.go:172] (0xc0006cc320) (1) Data frame handling\nI0323 00:07:05.564741 891 log.go:172] (0xc0006cc320) (1) Data frame sent\nI0323 00:07:05.564764 891 log.go:172] (0xc000768b00) (0xc0006cc320) Stream removed, broadcasting: 1\nI0323 00:07:05.564802 891 log.go:172] (0xc000768b00) Go away received\nI0323 00:07:05.565354 891 log.go:172] (0xc000768b00) (0xc0006cc320) Stream removed, broadcasting: 1\nI0323 00:07:05.565385 891 log.go:172] (0xc000768b00) (0xc0006cc3c0) Stream removed, broadcasting: 3\nI0323 00:07:05.565402 891 log.go:172] (0xc000768b00) (0xc0008b9d60) Stream removed, broadcasting: 5\n" Mar 23 00:07:05.569: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:07:05.569: INFO: stdout of mv 
-v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:07:15.969: INFO: Waiting for StatefulSet statefulset-1233/ss2 to complete update Mar 23 00:07:15.970: INFO: Waiting for Pod statefulset-1233/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 23 00:07:15.970: INFO: Waiting for Pod statefulset-1233/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 23 00:07:25.977: INFO: Waiting for StatefulSet statefulset-1233/ss2 to complete update Mar 23 00:07:25.977: INFO: Waiting for Pod statefulset-1233/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 23 00:07:35.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1233 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:07:36.226: INFO: stderr: "I0323 00:07:36.107497 913 log.go:172] (0xc0009d40b0) (0xc0009b0140) Create stream\nI0323 00:07:36.107583 913 log.go:172] (0xc0009d40b0) (0xc0009b0140) Stream added, broadcasting: 1\nI0323 00:07:36.110786 913 log.go:172] (0xc0009d40b0) Reply frame received for 1\nI0323 00:07:36.110852 913 log.go:172] (0xc0009d40b0) (0xc000976000) Create stream\nI0323 00:07:36.110871 913 log.go:172] (0xc0009d40b0) (0xc000976000) Stream added, broadcasting: 3\nI0323 00:07:36.112185 913 log.go:172] (0xc0009d40b0) Reply frame received for 3\nI0323 00:07:36.112230 913 log.go:172] (0xc0009d40b0) (0xc0009760a0) Create stream\nI0323 00:07:36.112240 913 log.go:172] (0xc0009d40b0) (0xc0009760a0) Stream added, broadcasting: 5\nI0323 00:07:36.113285 913 log.go:172] (0xc0009d40b0) Reply frame received for 5\nI0323 00:07:36.187753 913 log.go:172] (0xc0009d40b0) Data frame received for 5\nI0323 00:07:36.187773 913 log.go:172] (0xc0009760a0) (5) Data frame handling\nI0323 00:07:36.187783 913 log.go:172] 
(0xc0009760a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:07:36.219597 913 log.go:172] (0xc0009d40b0) Data frame received for 5\nI0323 00:07:36.219824 913 log.go:172] (0xc0009760a0) (5) Data frame handling\nI0323 00:07:36.219965 913 log.go:172] (0xc0009d40b0) Data frame received for 3\nI0323 00:07:36.220019 913 log.go:172] (0xc000976000) (3) Data frame handling\nI0323 00:07:36.220052 913 log.go:172] (0xc000976000) (3) Data frame sent\nI0323 00:07:36.220083 913 log.go:172] (0xc0009d40b0) Data frame received for 3\nI0323 00:07:36.220106 913 log.go:172] (0xc000976000) (3) Data frame handling\nI0323 00:07:36.222299 913 log.go:172] (0xc0009d40b0) Data frame received for 1\nI0323 00:07:36.222315 913 log.go:172] (0xc0009b0140) (1) Data frame handling\nI0323 00:07:36.222328 913 log.go:172] (0xc0009b0140) (1) Data frame sent\nI0323 00:07:36.222336 913 log.go:172] (0xc0009d40b0) (0xc0009b0140) Stream removed, broadcasting: 1\nI0323 00:07:36.222555 913 log.go:172] (0xc0009d40b0) Go away received\nI0323 00:07:36.222709 913 log.go:172] (0xc0009d40b0) (0xc0009b0140) Stream removed, broadcasting: 1\nI0323 00:07:36.222731 913 log.go:172] (0xc0009d40b0) (0xc000976000) Stream removed, broadcasting: 3\nI0323 00:07:36.222742 913 log.go:172] (0xc0009d40b0) (0xc0009760a0) Stream removed, broadcasting: 5\n" Mar 23 00:07:36.226: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:07:36.226: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:07:46.258: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 23 00:07:56.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1233 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:07:56.528: INFO: stderr: "I0323 
00:07:56.429206 933 log.go:172] (0xc0000e04d0) (0xc0003420a0) Create stream\nI0323 00:07:56.429273 933 log.go:172] (0xc0000e04d0) (0xc0003420a0) Stream added, broadcasting: 1\nI0323 00:07:56.430720 933 log.go:172] (0xc0000e04d0) Reply frame received for 1\nI0323 00:07:56.430821 933 log.go:172] (0xc0000e04d0) (0xc0008a4000) Create stream\nI0323 00:07:56.430833 933 log.go:172] (0xc0000e04d0) (0xc0008a4000) Stream added, broadcasting: 3\nI0323 00:07:56.431575 933 log.go:172] (0xc0000e04d0) Reply frame received for 3\nI0323 00:07:56.431611 933 log.go:172] (0xc0000e04d0) (0xc000342140) Create stream\nI0323 00:07:56.431620 933 log.go:172] (0xc0000e04d0) (0xc000342140) Stream added, broadcasting: 5\nI0323 00:07:56.432328 933 log.go:172] (0xc0000e04d0) Reply frame received for 5\nI0323 00:07:56.521528 933 log.go:172] (0xc0000e04d0) Data frame received for 3\nI0323 00:07:56.521571 933 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0323 00:07:56.521606 933 log.go:172] (0xc0008a4000) (3) Data frame sent\nI0323 00:07:56.521665 933 log.go:172] (0xc0000e04d0) Data frame received for 5\nI0323 00:07:56.521704 933 log.go:172] (0xc000342140) (5) Data frame handling\nI0323 00:07:56.521719 933 log.go:172] (0xc000342140) (5) Data frame sent\nI0323 00:07:56.521731 933 log.go:172] (0xc0000e04d0) Data frame received for 5\nI0323 00:07:56.521748 933 log.go:172] (0xc000342140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 00:07:56.521792 933 log.go:172] (0xc0000e04d0) Data frame received for 3\nI0323 00:07:56.521823 933 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0323 00:07:56.523468 933 log.go:172] (0xc0000e04d0) Data frame received for 1\nI0323 00:07:56.523501 933 log.go:172] (0xc0003420a0) (1) Data frame handling\nI0323 00:07:56.523522 933 log.go:172] (0xc0003420a0) (1) Data frame sent\nI0323 00:07:56.523567 933 log.go:172] (0xc0000e04d0) (0xc0003420a0) Stream removed, broadcasting: 1\nI0323 00:07:56.523603 933 log.go:172] 
(0xc0000e04d0) Go away received\nI0323 00:07:56.524075 933 log.go:172] (0xc0000e04d0) (0xc0003420a0) Stream removed, broadcasting: 1\nI0323 00:07:56.524097 933 log.go:172] (0xc0000e04d0) (0xc0008a4000) Stream removed, broadcasting: 3\nI0323 00:07:56.524110 933 log.go:172] (0xc0000e04d0) (0xc000342140) Stream removed, broadcasting: 5\n" Mar 23 00:07:56.528: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:07:56.528: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 23 00:08:16.550: INFO: Deleting all statefulset in ns statefulset-1233 Mar 23 00:08:16.553: INFO: Scaling statefulset ss2 to 0 Mar 23 00:08:36.582: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:08:36.586: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:08:36.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1233" for this suite. 
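The rolling update and rollback that this test drives through the API can be sketched with stock kubectl commands; this is an illustrative equivalent, not the test's own code, assuming the `ss2` StatefulSet and `statefulset-1233` namespace from the log:

```shell
# Trigger a rolling update by changing the pod template image
# (the image tags are the ones logged: 2.4.38-alpine -> 2.4.39-alpine).
kubectl -n statefulset-1233 set image statefulset/ss2 '*=docker.io/library/httpd:2.4.39-alpine'
kubectl -n statefulset-1233 rollout status statefulset/ss2

# Roll back to the previous controller revision (ss2-65c7964b94 in the log).
kubectl -n statefulset-1233 rollout undo statefulset/ss2
kubectl -n statefulset-1233 rollout status statefulset/ss2
```

As the log shows, the update proceeds in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), with each pod's `controller-revision-hash` label moving to the new revision as it is recreated.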
• [SLOW TEST:131.731 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":107,"skipped":1640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:08:36.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-67de564d-e19f-4746-9f53-aabda708eef4 STEP: Creating a pod to test consume configMaps Mar 23 00:08:36.674: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078" in namespace "projected-3774" to be "Succeeded or Failed" Mar 23 00:08:36.680: INFO: Pod 
"pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078": Phase="Pending", Reason="", readiness=false. Elapsed: 5.750006ms Mar 23 00:08:38.684: INFO: Pod "pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010133735s Mar 23 00:08:40.689: INFO: Pod "pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014491904s STEP: Saw pod success Mar 23 00:08:40.689: INFO: Pod "pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078" satisfied condition "Succeeded or Failed" Mar 23 00:08:40.692: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078 container projected-configmap-volume-test: STEP: delete the pod Mar 23 00:08:40.728: INFO: Waiting for pod pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078 to disappear Mar 23 00:08:40.732: INFO: Pod pod-projected-configmaps-e2783d90-00db-4ce0-983e-024ce2135078 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:08:40.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3774" for this suite. 
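What this test creates can be approximated by the following manifest: a ConfigMap projected into a volume with a key-to-path mapping, consumed by a non-root container. All names, the key/path, and the user ID are illustrative, not the test's generated values:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo-pod
spec:
  securityContext:
    runAsUser: 1000            # non-root, as the [NodeConformance] case requires
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1        # mapping: key data-1 appears at path/to/data-1
            path: path/to/data-1
EOF
```

The pod runs to completion once the mapped file is readable, which is the "Succeeded or Failed" condition the framework polls for in the log above.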
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1665,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:08:40.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8882 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-8882 Mar 23 00:08:40.828: INFO: Found 0 stateful pods, waiting for 1 Mar 23 00:08:50.832: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 23 00:08:50.871: INFO: Deleting all statefulset in ns statefulset-8882 Mar 23 00:08:50.878: INFO: 
Scaling statefulset ss to 0 Mar 23 00:09:10.930: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:09:10.934: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:09:10.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8882" for this suite. • [SLOW TEST:30.221 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":109,"skipped":1686,"failed":0} S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:09:10.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 23 00:09:15.551: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6345fd13-e0aa-457d-95e6-4c250496eb58" Mar 23 00:09:15.551: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6345fd13-e0aa-457d-95e6-4c250496eb58" in namespace "pods-9286" to be "terminated due to deadline exceeded" Mar 23 00:09:15.609: INFO: Pod "pod-update-activedeadlineseconds-6345fd13-e0aa-457d-95e6-4c250496eb58": Phase="Running", Reason="", readiness=true. Elapsed: 58.635188ms Mar 23 00:09:17.613: INFO: Pod "pod-update-activedeadlineseconds-6345fd13-e0aa-457d-95e6-4c250496eb58": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.062654952s Mar 23 00:09:17.613: INFO: Pod "pod-update-activedeadlineseconds-6345fd13-e0aa-457d-95e6-4c250496eb58" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:09:17.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9286" for this suite. 
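`activeDeadlineSeconds` is one of the few pod-spec fields that may be updated on a running pod (it can be set, or decreased). A sketch of the update this test performs, with an illustrative pod name:

```shell
# Lower the deadline on a running pod; once it elapses, the kubelet kills the
# pod and it transitions to Failed with reason DeadlineExceeded (as logged above).
kubectl patch pod pod-update-activedeadlineseconds-demo \
  --type merge -p '{"spec":{"activeDeadlineSeconds":5}}'

# Observe the resulting phase and reason after the deadline passes.
kubectl get pod pod-update-activedeadlineseconds-demo \
  -o jsonpath='{.status.phase} {.status.reason}'
```

This matches the transition visible in the log: `Phase="Running"` shortly after the patch, then `Phase="Failed", Reason="DeadlineExceeded"` about two seconds later.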
• [SLOW TEST:6.662 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1687,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:09:17.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 23 00:09:17.746: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:17.751: INFO: Number of nodes with available pods: 0
Mar 23 00:09:17.751: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:18.756: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:18.759: INFO: Number of nodes with available pods: 0
Mar 23 00:09:18.759: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:19.756: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:19.760: INFO: Number of nodes with available pods: 0
Mar 23 00:09:19.760: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:20.756: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:20.759: INFO: Number of nodes with available pods: 0
Mar 23 00:09:20.759: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:21.756: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:21.760: INFO: Number of nodes with available pods: 2
Mar 23 00:09:21.760: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar 23 00:09:21.778: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:21.781: INFO: Number of nodes with available pods: 1
Mar 23 00:09:21.781: INFO: Node latest-worker2 is running more than one daemon pod
Mar 23 00:09:22.786: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:22.800: INFO: Number of nodes with available pods: 1
Mar 23 00:09:22.800: INFO: Node latest-worker2 is running more than one daemon pod
Mar 23 00:09:23.786: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:23.790: INFO: Number of nodes with available pods: 1
Mar 23 00:09:23.790: INFO: Node latest-worker2 is running more than one daemon pod
Mar 23 00:09:24.789: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:24.792: INFO: Number of nodes with available pods: 1
Mar 23 00:09:24.792: INFO: Node latest-worker2 is running more than one daemon pod
Mar 23 00:09:25.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:25.791: INFO: Number of nodes with available pods: 1
Mar 23 00:09:25.791: INFO: Node latest-worker2 is running more than one daemon pod
Mar 23 00:09:26.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:26.791: INFO: Number of nodes with available pods: 1
Mar 23 00:09:26.791: INFO: Node latest-worker2 is running more than one daemon pod
Mar 23 00:09:27.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:27.788: INFO: Number of nodes with available pods: 1
Mar 23 00:09:27.788: INFO: Node latest-worker2 is running more than one daemon pod
Mar 23 00:09:28.786: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:28.789: INFO: Number of nodes with available pods: 2
Mar 23 00:09:28.789: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5845, will wait for the garbage collector to delete the pods
Mar 23 00:09:28.850: INFO: Deleting DaemonSet.extensions daemon-set took: 5.451433ms
Mar 23 00:09:29.150: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.235855ms
Mar 23 00:09:43.054: INFO: Number of nodes with available pods: 0
Mar 23 00:09:43.054: INFO: Number of running nodes: 0, number of available pods: 0
Mar 23 00:09:43.060: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5845/daemonsets","resourceVersion":"2012249"},"items":null}
Mar 23 00:09:43.063: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5845/pods","resourceVersion":"2012249"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:09:43.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5845" for this suite.
• [SLOW TEST:25.457 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":111,"skipped":1694,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:09:43.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 23 00:09:43.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a" in namespace "downward-api-1161" to be "Succeeded or Failed"
Mar 23 00:09:43.159: INFO: Pod "downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190114ms
Mar 23 00:09:45.162: INFO: Pod "downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011625248s
Mar 23 00:09:47.172: INFO: Pod "downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0211008s
STEP: Saw pod success
Mar 23 00:09:47.172: INFO: Pod "downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a" satisfied condition "Succeeded or Failed"
Mar 23 00:09:47.183: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a container client-container: 
STEP: delete the pod
Mar 23 00:09:47.209: INFO: Waiting for pod downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a to disappear
Mar 23 00:09:47.213: INFO: Pod downwardapi-volume-226f131c-65cb-4963-ab22-590b45d0066a no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:09:47.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1161" for this suite.
•
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1702,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:09:47.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 23 00:09:51.848: INFO: Successfully updated pod "pod-update-57d2baf7-4702-4863-b898-381cdf874161"
STEP: verifying the updated pod is in kubernetes
Mar 23 00:09:51.869: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:09:51.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5359" for this suite.
•
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1717,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:09:51.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:09:51.971: INFO: Create a RollingUpdate DaemonSet
Mar 23 00:09:51.974: INFO: Check that daemon pods launch on every node of the cluster
Mar 23 00:09:51.981: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:51.986: INFO: Number of nodes with available pods: 0
Mar 23 00:09:51.986: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:52.992: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:52.995: INFO: Number of nodes with available pods: 0
Mar 23 00:09:52.995: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:53.992: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:53.995: INFO: Number of nodes with available pods: 0
Mar 23 00:09:53.995: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:54.991: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:54.994: INFO: Number of nodes with available pods: 1
Mar 23 00:09:54.994: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:09:55.992: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:09:55.995: INFO: Number of nodes with available pods: 2
Mar 23 00:09:55.995: INFO: Number of running nodes: 2, number of available pods: 2
Mar 23 00:09:55.995: INFO: Update the DaemonSet to trigger a rollout
Mar 23 00:09:56.002: INFO: Updating DaemonSet daemon-set
Mar 23 00:10:04.019: INFO: Roll back the DaemonSet before rollout is complete
Mar 23 00:10:04.024: INFO: Updating DaemonSet daemon-set
Mar 23 00:10:04.024: INFO: Make sure DaemonSet rollback is complete
Mar 23 00:10:04.052: INFO: Wrong image for pod: daemon-set-xsm5d. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 23 00:10:04.052: INFO: Pod daemon-set-xsm5d is not available
Mar 23 00:10:04.072: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:10:05.075: INFO: Wrong image for pod: daemon-set-xsm5d. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 23 00:10:05.075: INFO: Pod daemon-set-xsm5d is not available
Mar 23 00:10:05.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:10:06.076: INFO: Wrong image for pod: daemon-set-xsm5d. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 23 00:10:06.076: INFO: Pod daemon-set-xsm5d is not available
Mar 23 00:10:06.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 23 00:10:07.076: INFO: Pod daemon-set-6zmfz is not available
Mar 23 00:10:07.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8082, will wait for the garbage collector to delete the pods
Mar 23 00:10:07.147: INFO: Deleting DaemonSet.extensions daemon-set took: 6.60266ms
Mar 23 00:10:07.547: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.303432ms
Mar 23 00:10:22.851: INFO: Number of nodes with available pods: 0
Mar 23 00:10:22.851: INFO: Number of running nodes: 0, number of available pods: 0
Mar 23 00:10:22.854: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8082/daemonsets","resourceVersion":"2012510"},"items":null}
Mar 23 00:10:22.856: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8082/pods","resourceVersion":"2012510"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:10:22.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8082" for this suite.
• [SLOW TEST:30.996 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":114,"skipped":1717,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:10:22.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 23 00:10:23.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 23 00:10:25.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519023, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519023, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519023, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519023, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 23 00:10:28.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:10:28.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1126" for this suite.
STEP: Destroying namespace "webhook-1126-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.151 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":115,"skipped":1735,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:10:29.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:10:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4739" for this suite.
•
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1736,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:10:33.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b
Mar 23 00:10:33.185: INFO: Pod name my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b: Found 0 pods out of 1
Mar 23 00:10:38.197: INFO: Pod name my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b: Found 1 pods out of 1
Mar 23 00:10:38.197: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b" are running
Mar 23 00:10:38.203: INFO: Pod "my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b-drx9b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 00:10:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 00:10:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 00:10:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 00:10:33 +0000 UTC Reason: Message:}])
Mar 23 00:10:38.203: INFO: Trying to dial the pod
Mar 23 00:10:43.216: INFO: Controller my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b: Got expected result from replica 1 [my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b-drx9b]: "my-hostname-basic-986c3b58-66af-47ff-a2cc-a1b6b3117b6b-drx9b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:10:43.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3553" for this suite.
• [SLOW TEST:10.107 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":117,"skipped":1744,"failed":0}
SSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:10:43.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:10:43.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:10:47.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7608" for this suite.
•
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1748,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:10:47.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:10:47.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3714
I0323 00:10:47.511447       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3714, replica count: 1
I0323 00:10:48.561905       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0323 00:10:49.562121       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0323 00:10:50.562396       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0323 00:10:51.562667       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 23 00:10:51.693: INFO: Created: latency-svc-wfdjx
Mar 23 00:10:51.744: INFO: Got endpoints: latency-svc-wfdjx [81.617231ms]
Mar 23 00:10:51.772: INFO: Created: latency-svc-f46gw
Mar 23 00:10:51.784: INFO: Got endpoints: latency-svc-f46gw [39.756234ms]
Mar 23 00:10:51.814: INFO: Created: latency-svc-n2njk
Mar 23 00:10:51.826: INFO: Got endpoints: latency-svc-n2njk [82.315216ms]
Mar 23 00:10:51.911: INFO: Created: latency-svc-sltn4
Mar 23 00:10:51.933: INFO: Got endpoints: latency-svc-sltn4 [188.868317ms]
Mar 23 00:10:51.934: INFO: Created: latency-svc-dgr7h
Mar 23 00:10:51.947: INFO: Got endpoints: latency-svc-dgr7h [202.604555ms]
Mar 23 00:10:51.963: INFO: Created: latency-svc-tvx5d
Mar 23 00:10:51.977: INFO: Got endpoints: latency-svc-tvx5d [232.289526ms]
Mar 23 00:10:51.992: INFO: Created: latency-svc-qcwnt
Mar 23 00:10:52.007: INFO: Got endpoints: latency-svc-qcwnt [262.342213ms]
Mar 23 00:10:52.042: INFO: Created: latency-svc-rdmb5
Mar 23 00:10:52.060: INFO: Got endpoints: latency-svc-rdmb5 [315.605539ms]
Mar 23 00:10:52.061: INFO: Created: latency-svc-jhpmq
Mar 23 00:10:52.072: INFO: Got endpoints: latency-svc-jhpmq [328.188097ms]
Mar 23 00:10:52.090: INFO: Created: latency-svc-9zmn7
Mar 23 00:10:52.103: INFO: Got endpoints: latency-svc-9zmn7 [358.152906ms]
Mar 23 00:10:52.120: INFO: Created: latency-svc-nvjws
Mar 23 00:10:52.132: INFO: Got endpoints: latency-svc-nvjws [387.354138ms]
Mar 23 00:10:52.168: INFO: Created: latency-svc-h9mkd
Mar 23 00:10:52.180: INFO: Got endpoints: latency-svc-h9mkd [435.233055ms]
Mar 23 00:10:52.198: INFO: Created: latency-svc-css2d
Mar 23 00:10:52.215: INFO: Got endpoints: latency-svc-css2d [470.999519ms]
Mar 23 00:10:52.232: INFO: Created: latency-svc-6lp8r
Mar 23 00:10:52.247: INFO: Got endpoints: latency-svc-6lp8r [502.122236ms]
Mar 23 00:10:52.294: INFO: Created: latency-svc-hl46f
Mar 23 00:10:52.324: INFO: Created: latency-svc-xfvdz
Mar 23 00:10:52.325: INFO: Got endpoints: latency-svc-hl46f [580.162623ms]
Mar 23 00:10:52.341: INFO: Got endpoints: latency-svc-xfvdz [596.859653ms]
Mar 23 00:10:52.360: INFO: Created: latency-svc-wz7qr
Mar 23 00:10:52.371: INFO: Got endpoints: latency-svc-wz7qr [587.160239ms]
Mar 23 00:10:52.426: INFO: Created: latency-svc-9jjc5
Mar 23 00:10:52.443: INFO: Got endpoints: latency-svc-9jjc5 [616.0489ms]
Mar 23 00:10:52.443: INFO: Created: latency-svc-6nxmz
Mar 23 00:10:52.467: INFO: Got endpoints: latency-svc-6nxmz [533.682721ms]
Mar 23 00:10:52.497: INFO: Created: latency-svc-j29zc
Mar 23 00:10:52.516: INFO: Got endpoints: latency-svc-j29zc [568.716226ms]
Mar 23 00:10:52.600: INFO: Created: latency-svc-22tpt
Mar 23 00:10:52.619: INFO: Got endpoints: latency-svc-22tpt [642.625281ms]
Mar 23 00:10:52.653: INFO: Created: latency-svc-f4gvl
Mar 23 00:10:52.666: INFO: Got endpoints: latency-svc-f4gvl [659.106491ms]
Mar 23 00:10:52.684: INFO: Created: latency-svc-l5mz2
Mar 23 00:10:52.713: INFO: Got endpoints: latency-svc-l5mz2 [652.535498ms]
Mar 23 00:10:52.732: INFO: Created: latency-svc-hdjhl
Mar 23 00:10:52.750: INFO: Got endpoints: latency-svc-hdjhl [677.430564ms]
Mar 23 00:10:52.803: INFO: Created: latency-svc-g6mx7
Mar 23 00:10:52.832: INFO: Got endpoints: latency-svc-g6mx7 [729.455395ms]
Mar 23 00:10:52.863: INFO: Created: latency-svc-5jdfh
Mar 23 00:10:52.880: INFO: Got endpoints: latency-svc-5jdfh [748.323445ms]
Mar 23 00:10:52.971: INFO: Created: latency-svc-k2m4m
Mar 23 00:10:52.978: INFO: Got endpoints: latency-svc-k2m4m [798.539863ms]
Mar 23 00:10:53.007: INFO: Created: latency-svc-25dbq
Mar 23 00:10:53.024: INFO: Got endpoints: latency-svc-25dbq [808.811663ms]
Mar 23 00:10:53.045: INFO: Created: latency-svc-6tdh4
Mar 23 00:10:53.060: INFO: Got endpoints: latency-svc-6tdh4 [813.202194ms]
Mar 23 00:10:53.115: INFO: Created: latency-svc-njv52
Mar 23 00:10:53.135: INFO: Got endpoints: latency-svc-njv52 [810.040393ms]
Mar 23 00:10:53.136: INFO: Created: latency-svc-tjnzx
Mar 23 00:10:53.156: INFO: Got endpoints: latency-svc-tjnzx [814.876777ms]
Mar 23 00:10:53.187: INFO: Created: latency-svc-w455v
Mar 23 00:10:53.199: INFO: Got endpoints: latency-svc-w455v [827.358816ms]
Mar 23 00:10:53.246: INFO: Created: latency-svc-v4lw6
Mar 23 00:10:53.266: INFO: Got endpoints: latency-svc-v4lw6 [823.723433ms]
Mar 23 00:10:53.267: INFO: Created: latency-svc-45lmg
Mar 23 00:10:53.282: INFO: Got endpoints: latency-svc-45lmg [815.495164ms]
Mar 23 00:10:53.308: INFO: Created: latency-svc-kcj2h
Mar 23 00:10:53.339: INFO: Got endpoints: latency-svc-kcj2h [822.951882ms]
Mar 23 00:10:53.385: INFO: Created: latency-svc-hvwmk
Mar 23 00:10:53.397: INFO: Got endpoints: latency-svc-hvwmk [777.270694ms]
Mar 23 00:10:53.414: INFO: Created: latency-svc-ttx7d
Mar 23 00:10:53.426: INFO: Got endpoints: latency-svc-ttx7d [760.593971ms]
Mar 23 00:10:53.445: INFO: Created: latency-svc-4c9cf
Mar 23 00:10:53.457: INFO: Got endpoints: latency-svc-4c9cf [743.75481ms]
Mar 23 00:10:53.476: INFO: Created: latency-svc-xkn8m
Mar 23 00:10:53.527: INFO: Got endpoints: latency-svc-xkn8m [777.288592ms]
Mar 23 00:10:53.542: INFO: Created: latency-svc-pp2nd
Mar 23 00:10:53.557: INFO: Got endpoints: latency-svc-pp2nd [725.090578ms]
Mar 23 00:10:53.589: INFO: Created: latency-svc-lzb7q
Mar 23 00:10:53.617: INFO: Got endpoints: latency-svc-lzb7q [737.130136ms]
Mar 23 00:10:53.665: INFO: Created: latency-svc-kb5q6
Mar 23 00:10:53.671: INFO: Got endpoints: latency-svc-kb5q6 [693.023519ms]
Mar 23 00:10:53.692: INFO: Created: latency-svc-w7vsb
Mar 23 00:10:53.716: INFO: Got endpoints: latency-svc-w7vsb [691.674122ms]
Mar 23 00:10:53.740: INFO: Created: latency-svc-qcv2r
Mar 23 00:10:53.749: INFO: Got endpoints: latency-svc-qcv2r [689.377429ms]
Mar 23 00:10:53.764: INFO: Created: latency-svc-s5sgq
Mar 23 00:10:53.803: INFO: Got endpoints: latency-svc-s5sgq [667.974558ms]
Mar 23 00:10:53.841: INFO: Created: latency-svc-hwdnw
Mar 23 00:10:53.871: INFO: Got endpoints: latency-svc-hwdnw [714.203381ms]
Mar 23 00:10:53.895: INFO: Created: latency-svc-zn9zb
Mar 23 00:10:53.934: INFO: Got endpoints: latency-svc-zn9zb [735.693734ms]
Mar 23 00:10:53.938: INFO: Created: latency-svc-5s6n9
Mar 23 00:10:53.954: INFO: Got endpoints: latency-svc-5s6n9 [687.610717ms]
Mar 23 00:10:53.974: INFO: Created: latency-svc-xg76t
Mar 23 00:10:54.006: INFO: Got endpoints: latency-svc-xg76t [724.012176ms]
Mar 23 00:10:54.027: INFO: Created: latency-svc-27jcc
Mar 23 00:10:54.054: INFO: Got endpoints: latency-svc-27jcc [715.082885ms]
Mar 23 00:10:54.068: INFO: Created: latency-svc-8sxnw
Mar 23 00:10:54.079: INFO: Got endpoints: latency-svc-8sxnw [682.375458ms]
Mar 23 00:10:54.092: INFO: Created: latency-svc-lgfqw
Mar 23 00:10:54.103: INFO: Got endpoints: latency-svc-lgfqw [677.106685ms]
Mar 23 00:10:54.124: INFO: Created: latency-svc-v5526
Mar 23 00:10:54.139: INFO: Got endpoints: latency-svc-v5526 [682.051123ms]
Mar 23 00:10:54.179: INFO: Created: latency-svc-nrtfj
Mar 23 00:10:54.203: INFO: Got endpoints: latency-svc-nrtfj [675.447576ms]
Mar 23 00:10:54.203: INFO: Created: latency-svc-gtd8k
Mar 23 00:10:54.216: INFO: Got endpoints: latency-svc-gtd8k [658.990943ms]
Mar 23 00:10:54.267: INFO: Created: latency-svc-hzjjn
Mar 23 00:10:54.299: INFO: Got endpoints: latency-svc-hzjjn [681.916504ms]
Mar 23 00:10:54.328: INFO: Created: latency-svc-qcjb7
Mar 23 00:10:54.348: INFO: Got endpoints: latency-svc-qcjb7 [677.133236ms]
Mar 23 00:10:54.370: INFO: Created: latency-svc-l99p7
Mar 23 00:10:54.384: INFO: Got endpoints: latency-svc-l99p7 [667.94857ms]
Mar 23 00:10:54.432: INFO: Created: latency-svc-jxzbx
Mar 23 00:10:54.452: INFO: Got endpoints: latency-svc-jxzbx [703.011804ms]
Mar 23 00:10:54.454: INFO: Created: latency-svc-pvlmz
Mar 23 00:10:54.477: INFO: Got endpoints: latency-svc-pvlmz [673.701096ms]
Mar 23 00:10:54.500: INFO: Created: latency-svc-tmhkk
Mar 23 00:10:54.511: INFO: Got endpoints: latency-svc-tmhkk [640.040319ms]
Mar 23 00:10:54.569: INFO: Created: latency-svc-s8stg
Mar 23 00:10:54.610: INFO: Got endpoints: latency-svc-s8stg [675.894367ms]
Mar 23 00:10:54.611: INFO: Created: latency-svc-4dr56
Mar 23 00:10:54.639: INFO: Got endpoints: latency-svc-4dr56 [684.679392ms]
Mar 23 00:10:54.701: INFO: Created: latency-svc-xdqcs
Mar 23 00:10:54.723: INFO: Created: latency-svc-psfrs
Mar 23 00:10:54.723: INFO: Got endpoints: latency-svc-xdqcs [716.232602ms]
Mar 23 00:10:54.748: INFO: Got endpoints: latency-svc-psfrs [694.240395ms]
Mar 23 00:10:54.778: INFO: Created: latency-svc-vcjhc
Mar 23 00:10:54.792: INFO: Got endpoints: latency-svc-vcjhc [712.336678ms]
Mar 23 00:10:54.821: INFO: Created: latency-svc-7zh9m
Mar 23 00:10:54.833: INFO: Got endpoints: latency-svc-7zh9m [729.950176ms]
Mar 23 00:10:54.855: INFO: Created: latency-svc-d9dq2
Mar 23 00:10:54.902: INFO: Got endpoints: latency-svc-d9dq2 [763.80751ms]
Mar 23 00:10:54.952: INFO: Created: latency-svc-fh9fg
Mar 23 00:10:54.959: INFO: Got endpoints: latency-svc-fh9fg [756.190724ms]
Mar 23 00:10:54.976: INFO: Created: latency-svc-k28tv
Mar 23 00:10:54.990: INFO: Got endpoints: latency-svc-k28tv [773.203137ms]
Mar 23 00:10:55.006: INFO: Created: latency-svc-6wmr2
Mar 23 00:10:55.019: INFO: Got endpoints: latency-svc-6wmr2 [719.95046ms]
Mar 23 00:10:55.078: INFO: Created: latency-svc-qltd9
Mar 23 00:10:55.101: INFO: Created: latency-svc-dxmq8
Mar 23 00:10:55.101: INFO: Got endpoints: latency-svc-qltd9 [752.745886ms]
Mar 23 00:10:55.110: INFO: Got endpoints: 
latency-svc-dxmq8 [725.903684ms] Mar 23 00:10:55.131: INFO: Created: latency-svc-5z4gf Mar 23 00:10:55.140: INFO: Got endpoints: latency-svc-5z4gf [687.220077ms] Mar 23 00:10:55.156: INFO: Created: latency-svc-9vsdw Mar 23 00:10:55.172: INFO: Got endpoints: latency-svc-9vsdw [695.533588ms] Mar 23 00:10:55.198: INFO: Created: latency-svc-bchxj Mar 23 00:10:55.216: INFO: Got endpoints: latency-svc-bchxj [705.649018ms] Mar 23 00:10:55.218: INFO: Created: latency-svc-58dgb Mar 23 00:10:55.245: INFO: Got endpoints: latency-svc-58dgb [634.220209ms] Mar 23 00:10:55.262: INFO: Created: latency-svc-4ctqd Mar 23 00:10:55.272: INFO: Got endpoints: latency-svc-4ctqd [633.193914ms] Mar 23 00:10:55.287: INFO: Created: latency-svc-h7rvl Mar 23 00:10:55.329: INFO: Got endpoints: latency-svc-h7rvl [606.53027ms] Mar 23 00:10:55.336: INFO: Created: latency-svc-9s8fr Mar 23 00:10:55.348: INFO: Got endpoints: latency-svc-9s8fr [600.336822ms] Mar 23 00:10:55.366: INFO: Created: latency-svc-bwn6c Mar 23 00:10:55.379: INFO: Got endpoints: latency-svc-bwn6c [587.341571ms] Mar 23 00:10:55.396: INFO: Created: latency-svc-fkmvq Mar 23 00:10:55.408: INFO: Got endpoints: latency-svc-fkmvq [574.866015ms] Mar 23 00:10:55.456: INFO: Created: latency-svc-snnqt Mar 23 00:10:55.479: INFO: Got endpoints: latency-svc-snnqt [576.350154ms] Mar 23 00:10:55.480: INFO: Created: latency-svc-xzj6h Mar 23 00:10:55.493: INFO: Got endpoints: latency-svc-xzj6h [533.549327ms] Mar 23 00:10:55.514: INFO: Created: latency-svc-tq6wh Mar 23 00:10:55.529: INFO: Got endpoints: latency-svc-tq6wh [539.661391ms] Mar 23 00:10:55.593: INFO: Created: latency-svc-vnxz5 Mar 23 00:10:55.619: INFO: Got endpoints: latency-svc-vnxz5 [599.113514ms] Mar 23 00:10:55.620: INFO: Created: latency-svc-qgph2 Mar 23 00:10:55.638: INFO: Got endpoints: latency-svc-qgph2 [536.375083ms] Mar 23 00:10:55.659: INFO: Created: latency-svc-vkx4d Mar 23 00:10:55.683: INFO: Got endpoints: latency-svc-vkx4d [572.840494ms] Mar 23 00:10:55.737: INFO: 
Created: latency-svc-sv6tn Mar 23 00:10:55.754: INFO: Got endpoints: latency-svc-sv6tn [614.527986ms] Mar 23 00:10:55.755: INFO: Created: latency-svc-p7z25 Mar 23 00:10:55.763: INFO: Got endpoints: latency-svc-p7z25 [590.490289ms] Mar 23 00:10:55.780: INFO: Created: latency-svc-r7kqw Mar 23 00:10:55.804: INFO: Got endpoints: latency-svc-r7kqw [587.681043ms] Mar 23 00:10:55.835: INFO: Created: latency-svc-45s2t Mar 23 00:10:55.886: INFO: Got endpoints: latency-svc-45s2t [641.520569ms] Mar 23 00:10:55.937: INFO: Created: latency-svc-rpmlv Mar 23 00:10:55.961: INFO: Got endpoints: latency-svc-rpmlv [689.044072ms] Mar 23 00:10:55.983: INFO: Created: latency-svc-7c657 Mar 23 00:10:56.030: INFO: Got endpoints: latency-svc-7c657 [700.830288ms] Mar 23 00:10:56.061: INFO: Created: latency-svc-f884j Mar 23 00:10:56.086: INFO: Got endpoints: latency-svc-f884j [737.32105ms] Mar 23 00:10:56.121: INFO: Created: latency-svc-55t6c Mar 23 00:10:56.150: INFO: Got endpoints: latency-svc-55t6c [771.369617ms] Mar 23 00:10:56.171: INFO: Created: latency-svc-6bs44 Mar 23 00:10:56.189: INFO: Got endpoints: latency-svc-6bs44 [780.143205ms] Mar 23 00:10:56.238: INFO: Created: latency-svc-fzz5p Mar 23 00:10:56.270: INFO: Got endpoints: latency-svc-fzz5p [791.066153ms] Mar 23 00:10:56.278: INFO: Created: latency-svc-4rr5q Mar 23 00:10:56.302: INFO: Got endpoints: latency-svc-4rr5q [809.66934ms] Mar 23 00:10:56.331: INFO: Created: latency-svc-dwqnz Mar 23 00:10:56.343: INFO: Got endpoints: latency-svc-dwqnz [813.886344ms] Mar 23 00:10:56.451: INFO: Created: latency-svc-q7p6r Mar 23 00:10:56.458: INFO: Got endpoints: latency-svc-q7p6r [839.247675ms] Mar 23 00:10:56.477: INFO: Created: latency-svc-jl24s Mar 23 00:10:56.494: INFO: Got endpoints: latency-svc-jl24s [856.414012ms] Mar 23 00:10:56.519: INFO: Created: latency-svc-j97rt Mar 23 00:10:56.536: INFO: Got endpoints: latency-svc-j97rt [853.097962ms] Mar 23 00:10:56.588: INFO: Created: latency-svc-txz2h Mar 23 00:10:56.607: INFO: Created: 
latency-svc-9ffl9 Mar 23 00:10:56.607: INFO: Got endpoints: latency-svc-txz2h [852.996654ms] Mar 23 00:10:56.620: INFO: Got endpoints: latency-svc-9ffl9 [857.437063ms] Mar 23 00:10:56.640: INFO: Created: latency-svc-6ghlh Mar 23 00:10:56.656: INFO: Got endpoints: latency-svc-6ghlh [851.926587ms] Mar 23 00:10:56.725: INFO: Created: latency-svc-dq4n8 Mar 23 00:10:56.733: INFO: Got endpoints: latency-svc-dq4n8 [846.436303ms] Mar 23 00:10:56.769: INFO: Created: latency-svc-l7shk Mar 23 00:10:56.787: INFO: Got endpoints: latency-svc-l7shk [825.582437ms] Mar 23 00:10:56.811: INFO: Created: latency-svc-9xd9q Mar 23 00:10:56.862: INFO: Got endpoints: latency-svc-9xd9q [832.252627ms] Mar 23 00:10:56.895: INFO: Created: latency-svc-ftgm6 Mar 23 00:10:56.913: INFO: Got endpoints: latency-svc-ftgm6 [826.611969ms] Mar 23 00:10:56.939: INFO: Created: latency-svc-kttqh Mar 23 00:10:57.024: INFO: Got endpoints: latency-svc-kttqh [873.575252ms] Mar 23 00:10:57.045: INFO: Created: latency-svc-rvdhz Mar 23 00:10:57.068: INFO: Got endpoints: latency-svc-rvdhz [879.590181ms] Mar 23 00:10:57.087: INFO: Created: latency-svc-xxwch Mar 23 00:10:57.158: INFO: Got endpoints: latency-svc-xxwch [888.308656ms] Mar 23 00:10:57.183: INFO: Created: latency-svc-27vw7 Mar 23 00:10:57.207: INFO: Got endpoints: latency-svc-27vw7 [904.598782ms] Mar 23 00:10:57.244: INFO: Created: latency-svc-sskgp Mar 23 00:10:57.275: INFO: Got endpoints: latency-svc-sskgp [931.921986ms] Mar 23 00:10:57.286: INFO: Created: latency-svc-mhgxj Mar 23 00:10:57.303: INFO: Got endpoints: latency-svc-mhgxj [844.967444ms] Mar 23 00:10:57.351: INFO: Created: latency-svc-6b9kw Mar 23 00:10:57.363: INFO: Got endpoints: latency-svc-6b9kw [868.5008ms] Mar 23 00:10:57.407: INFO: Created: latency-svc-x5bnq Mar 23 00:10:57.429: INFO: Created: latency-svc-88ksr Mar 23 00:10:57.429: INFO: Got endpoints: latency-svc-x5bnq [892.996946ms] Mar 23 00:10:57.440: INFO: Got endpoints: latency-svc-88ksr [833.135634ms] Mar 23 00:10:57.459: INFO: 
Created: latency-svc-sqd9q Mar 23 00:10:57.471: INFO: Got endpoints: latency-svc-sqd9q [850.358635ms] Mar 23 00:10:57.490: INFO: Created: latency-svc-4tdqj Mar 23 00:10:57.499: INFO: Got endpoints: latency-svc-4tdqj [843.001835ms] Mar 23 00:10:57.533: INFO: Created: latency-svc-6qjrt Mar 23 00:10:57.563: INFO: Created: latency-svc-dc9h4 Mar 23 00:10:57.563: INFO: Got endpoints: latency-svc-6qjrt [830.53021ms] Mar 23 00:10:57.584: INFO: Got endpoints: latency-svc-dc9h4 [796.873502ms] Mar 23 00:10:57.659: INFO: Created: latency-svc-ssw5k Mar 23 00:10:57.681: INFO: Created: latency-svc-hp2fq Mar 23 00:10:57.681: INFO: Got endpoints: latency-svc-ssw5k [818.896301ms] Mar 23 00:10:57.697: INFO: Got endpoints: latency-svc-hp2fq [784.749891ms] Mar 23 00:10:57.719: INFO: Created: latency-svc-9q99c Mar 23 00:10:57.727: INFO: Got endpoints: latency-svc-9q99c [703.392595ms] Mar 23 00:10:57.742: INFO: Created: latency-svc-jmg52 Mar 23 00:10:57.751: INFO: Got endpoints: latency-svc-jmg52 [683.148164ms] Mar 23 00:10:57.785: INFO: Created: latency-svc-47q6h Mar 23 00:10:57.788: INFO: Got endpoints: latency-svc-47q6h [629.62799ms] Mar 23 00:10:57.813: INFO: Created: latency-svc-ldmnn Mar 23 00:10:57.830: INFO: Got endpoints: latency-svc-ldmnn [623.340528ms] Mar 23 00:10:57.873: INFO: Created: latency-svc-9h8d9 Mar 23 00:10:57.904: INFO: Got endpoints: latency-svc-9h8d9 [628.97362ms] Mar 23 00:10:57.916: INFO: Created: latency-svc-pgzgg Mar 23 00:10:57.932: INFO: Got endpoints: latency-svc-pgzgg [629.161128ms] Mar 23 00:10:57.953: INFO: Created: latency-svc-7vqqf Mar 23 00:10:57.968: INFO: Got endpoints: latency-svc-7vqqf [605.309257ms] Mar 23 00:10:57.988: INFO: Created: latency-svc-bqd2c Mar 23 00:10:58.004: INFO: Got endpoints: latency-svc-bqd2c [574.608545ms] Mar 23 00:10:58.048: INFO: Created: latency-svc-8bg4q Mar 23 00:10:58.052: INFO: Got endpoints: latency-svc-8bg4q [612.008421ms] Mar 23 00:10:58.071: INFO: Created: latency-svc-xbnk8 Mar 23 00:10:58.087: INFO: Got 
endpoints: latency-svc-xbnk8 [616.03494ms] Mar 23 00:10:58.107: INFO: Created: latency-svc-rd6m6 Mar 23 00:10:58.123: INFO: Got endpoints: latency-svc-rd6m6 [623.744358ms] Mar 23 00:10:58.144: INFO: Created: latency-svc-4rjtp Mar 23 00:10:58.180: INFO: Got endpoints: latency-svc-4rjtp [616.486346ms] Mar 23 00:10:58.192: INFO: Created: latency-svc-p2gnk Mar 23 00:10:58.207: INFO: Got endpoints: latency-svc-p2gnk [623.01872ms] Mar 23 00:10:58.222: INFO: Created: latency-svc-8dv89 Mar 23 00:10:58.237: INFO: Got endpoints: latency-svc-8dv89 [555.72946ms] Mar 23 00:10:58.252: INFO: Created: latency-svc-8l72v Mar 23 00:10:58.267: INFO: Got endpoints: latency-svc-8l72v [569.373756ms] Mar 23 00:10:58.306: INFO: Created: latency-svc-qnvvt Mar 23 00:10:58.323: INFO: Created: latency-svc-9v59f Mar 23 00:10:58.324: INFO: Got endpoints: latency-svc-qnvvt [596.048057ms] Mar 23 00:10:58.333: INFO: Got endpoints: latency-svc-9v59f [581.601301ms] Mar 23 00:10:58.360: INFO: Created: latency-svc-hrlp2 Mar 23 00:10:58.376: INFO: Got endpoints: latency-svc-hrlp2 [587.423456ms] Mar 23 00:10:58.397: INFO: Created: latency-svc-7zffp Mar 23 00:10:58.449: INFO: Got endpoints: latency-svc-7zffp [618.852933ms] Mar 23 00:10:58.451: INFO: Created: latency-svc-f4xvv Mar 23 00:10:58.737: INFO: Got endpoints: latency-svc-f4xvv [832.611402ms] Mar 23 00:10:58.740: INFO: Created: latency-svc-cxsdx Mar 23 00:10:59.006: INFO: Got endpoints: latency-svc-cxsdx [1.073647408s] Mar 23 00:10:59.032: INFO: Created: latency-svc-kd779 Mar 23 00:10:59.047: INFO: Got endpoints: latency-svc-kd779 [1.078777248s] Mar 23 00:10:59.084: INFO: Created: latency-svc-npxcf Mar 23 00:10:59.099: INFO: Got endpoints: latency-svc-npxcf [1.095408711s] Mar 23 00:10:59.120: INFO: Created: latency-svc-9qtv4 Mar 23 00:10:59.136: INFO: Got endpoints: latency-svc-9qtv4 [1.083283568s] Mar 23 00:10:59.216: INFO: Created: latency-svc-vkngh Mar 23 00:10:59.266: INFO: Created: latency-svc-6tq4z Mar 23 00:10:59.266: INFO: Got endpoints: 
latency-svc-vkngh [1.179283294s] Mar 23 00:10:59.279: INFO: Got endpoints: latency-svc-6tq4z [1.155855589s] Mar 23 00:10:59.295: INFO: Created: latency-svc-cm2tz Mar 23 00:10:59.309: INFO: Got endpoints: latency-svc-cm2tz [1.128969162s] Mar 23 00:10:59.354: INFO: Created: latency-svc-mtszf Mar 23 00:10:59.378: INFO: Got endpoints: latency-svc-mtszf [1.171648237s] Mar 23 00:10:59.379: INFO: Created: latency-svc-kf2pd Mar 23 00:10:59.408: INFO: Got endpoints: latency-svc-kf2pd [1.170936405s] Mar 23 00:10:59.438: INFO: Created: latency-svc-czg8n Mar 23 00:10:59.447: INFO: Got endpoints: latency-svc-czg8n [1.180539853s] Mar 23 00:10:59.498: INFO: Created: latency-svc-9dc46 Mar 23 00:10:59.506: INFO: Got endpoints: latency-svc-9dc46 [1.181998853s] Mar 23 00:10:59.530: INFO: Created: latency-svc-45pbb Mar 23 00:10:59.544: INFO: Got endpoints: latency-svc-45pbb [1.210957657s] Mar 23 00:10:59.570: INFO: Created: latency-svc-6dxhx Mar 23 00:10:59.586: INFO: Got endpoints: latency-svc-6dxhx [1.210106024s] Mar 23 00:10:59.659: INFO: Created: latency-svc-ttbd6 Mar 23 00:10:59.698: INFO: Got endpoints: latency-svc-ttbd6 [1.248373545s] Mar 23 00:10:59.698: INFO: Created: latency-svc-qbjjq Mar 23 00:10:59.712: INFO: Got endpoints: latency-svc-qbjjq [974.80301ms] Mar 23 00:10:59.727: INFO: Created: latency-svc-z2xlz Mar 23 00:10:59.741: INFO: Got endpoints: latency-svc-z2xlz [735.428459ms] Mar 23 00:10:59.758: INFO: Created: latency-svc-845cp Mar 23 00:10:59.809: INFO: Got endpoints: latency-svc-845cp [761.686558ms] Mar 23 00:10:59.840: INFO: Created: latency-svc-cgqrj Mar 23 00:10:59.854: INFO: Got endpoints: latency-svc-cgqrj [755.128824ms] Mar 23 00:10:59.872: INFO: Created: latency-svc-x4nh4 Mar 23 00:10:59.890: INFO: Got endpoints: latency-svc-x4nh4 [754.473568ms] Mar 23 00:10:59.940: INFO: Created: latency-svc-np4vh Mar 23 00:10:59.943: INFO: Got endpoints: latency-svc-np4vh [676.992784ms] Mar 23 00:10:59.972: INFO: Created: latency-svc-twq44 Mar 23 00:10:59.986: INFO: Got 
endpoints: latency-svc-twq44 [707.381117ms] Mar 23 00:11:00.008: INFO: Created: latency-svc-m6z9j Mar 23 00:11:00.022: INFO: Got endpoints: latency-svc-m6z9j [712.77511ms] Mar 23 00:11:00.072: INFO: Created: latency-svc-lmggv Mar 23 00:11:00.086: INFO: Got endpoints: latency-svc-lmggv [707.406965ms] Mar 23 00:11:00.111: INFO: Created: latency-svc-rnj69 Mar 23 00:11:00.124: INFO: Got endpoints: latency-svc-rnj69 [716.177062ms] Mar 23 00:11:00.160: INFO: Created: latency-svc-tgkcr Mar 23 00:11:00.186: INFO: Got endpoints: latency-svc-tgkcr [738.321962ms] Mar 23 00:11:00.189: INFO: Created: latency-svc-x2lwj Mar 23 00:11:00.218: INFO: Got endpoints: latency-svc-x2lwj [712.349648ms] Mar 23 00:11:00.254: INFO: Created: latency-svc-ftppg Mar 23 00:11:00.262: INFO: Got endpoints: latency-svc-ftppg [718.277471ms] Mar 23 00:11:00.278: INFO: Created: latency-svc-6mqxv Mar 23 00:11:00.329: INFO: Got endpoints: latency-svc-6mqxv [743.538144ms] Mar 23 00:11:00.331: INFO: Created: latency-svc-gwkdl Mar 23 00:11:00.357: INFO: Got endpoints: latency-svc-gwkdl [659.846051ms] Mar 23 00:11:00.394: INFO: Created: latency-svc-8hsr2 Mar 23 00:11:00.405: INFO: Got endpoints: latency-svc-8hsr2 [693.300543ms] Mar 23 00:11:00.427: INFO: Created: latency-svc-2tb6k Mar 23 00:11:00.461: INFO: Got endpoints: latency-svc-2tb6k [719.630293ms] Mar 23 00:11:00.482: INFO: Created: latency-svc-hwv9w Mar 23 00:11:00.506: INFO: Got endpoints: latency-svc-hwv9w [696.817445ms] Mar 23 00:11:00.538: INFO: Created: latency-svc-mn8t9 Mar 23 00:11:00.555: INFO: Got endpoints: latency-svc-mn8t9 [700.505786ms] Mar 23 00:11:00.606: INFO: Created: latency-svc-w8q6p Mar 23 00:11:00.615: INFO: Got endpoints: latency-svc-w8q6p [724.192718ms] Mar 23 00:11:00.633: INFO: Created: latency-svc-s99hg Mar 23 00:11:00.655: INFO: Got endpoints: latency-svc-s99hg [712.202317ms] Mar 23 00:11:00.686: INFO: Created: latency-svc-lcxgv Mar 23 00:11:00.699: INFO: Got endpoints: latency-svc-lcxgv [712.531563ms] Mar 23 00:11:00.744: 
INFO: Created: latency-svc-nmfnk Mar 23 00:11:00.748: INFO: Got endpoints: latency-svc-nmfnk [725.814178ms] Mar 23 00:11:00.765: INFO: Created: latency-svc-9zss2 Mar 23 00:11:00.790: INFO: Got endpoints: latency-svc-9zss2 [704.347815ms] Mar 23 00:11:00.820: INFO: Created: latency-svc-4hvm7 Mar 23 00:11:00.838: INFO: Got endpoints: latency-svc-4hvm7 [713.561141ms] Mar 23 00:11:00.887: INFO: Created: latency-svc-jjtcf Mar 23 00:11:01.090: INFO: Got endpoints: latency-svc-jjtcf [904.567506ms] Mar 23 00:11:01.106: INFO: Created: latency-svc-z9czv Mar 23 00:11:01.492: INFO: Got endpoints: latency-svc-z9czv [1.273931316s] Mar 23 00:11:01.521: INFO: Created: latency-svc-9z2kn Mar 23 00:11:01.533: INFO: Got endpoints: latency-svc-9z2kn [1.270196583s] Mar 23 00:11:01.636: INFO: Created: latency-svc-g72bk Mar 23 00:11:01.652: INFO: Got endpoints: latency-svc-g72bk [1.322802007s] Mar 23 00:11:01.652: INFO: Created: latency-svc-qwvxp Mar 23 00:11:01.675: INFO: Got endpoints: latency-svc-qwvxp [1.317432351s] Mar 23 00:11:01.700: INFO: Created: latency-svc-cdb8r Mar 23 00:11:01.719: INFO: Got endpoints: latency-svc-cdb8r [1.313870707s] Mar 23 00:11:01.779: INFO: Created: latency-svc-8q6qd Mar 23 00:11:01.807: INFO: Created: latency-svc-xhr7z Mar 23 00:11:01.807: INFO: Got endpoints: latency-svc-8q6qd [1.345750031s] Mar 23 00:11:01.819: INFO: Got endpoints: latency-svc-xhr7z [1.313438313s] Mar 23 00:11:01.857: INFO: Created: latency-svc-hqkfh Mar 23 00:11:01.922: INFO: Got endpoints: latency-svc-hqkfh [1.366945596s] Mar 23 00:11:01.946: INFO: Created: latency-svc-hqgdj Mar 23 00:11:01.957: INFO: Got endpoints: latency-svc-hqgdj [1.342264099s] Mar 23 00:11:01.970: INFO: Created: latency-svc-9gh96 Mar 23 00:11:01.981: INFO: Got endpoints: latency-svc-9gh96 [1.325544215s] Mar 23 00:11:01.999: INFO: Created: latency-svc-lgghz Mar 23 00:11:02.060: INFO: Got endpoints: latency-svc-lgghz [1.361132085s] Mar 23 00:11:02.062: INFO: Created: latency-svc-2mdlv Mar 23 00:11:02.065: INFO: Got 
endpoints: latency-svc-2mdlv [1.317785209s]
Mar 23 00:11:02.084: INFO: Created: latency-svc-cr6hg
Mar 23 00:11:02.102: INFO: Got endpoints: latency-svc-cr6hg [1.311238377s]
Mar 23 00:11:02.126: INFO: Created: latency-svc-gmmcx
Mar 23 00:11:02.156: INFO: Got endpoints: latency-svc-gmmcx [1.317823007s]
Mar 23 00:11:02.196: INFO: Created: latency-svc-vhpzf
Mar 23 00:11:02.209: INFO: Got endpoints: latency-svc-vhpzf [1.119125379s]
Mar 23 00:11:02.230: INFO: Created: latency-svc-qhkmw
Mar 23 00:11:02.239: INFO: Got endpoints: latency-svc-qhkmw [747.22582ms]
Mar 23 00:11:02.239: INFO: Latencies: [39.756234ms 82.315216ms 188.868317ms 202.604555ms 232.289526ms 262.342213ms 315.605539ms 328.188097ms 358.152906ms 387.354138ms 435.233055ms 470.999519ms 502.122236ms 533.549327ms 533.682721ms 536.375083ms 539.661391ms 555.72946ms 568.716226ms 569.373756ms 572.840494ms 574.608545ms 574.866015ms 576.350154ms 580.162623ms 581.601301ms 587.160239ms 587.341571ms 587.423456ms 587.681043ms 590.490289ms 596.048057ms 596.859653ms 599.113514ms 600.336822ms 605.309257ms 606.53027ms 612.008421ms 614.527986ms 616.03494ms 616.0489ms 616.486346ms 618.852933ms 623.01872ms 623.340528ms 623.744358ms 628.97362ms 629.161128ms 629.62799ms 633.193914ms 634.220209ms 640.040319ms 641.520569ms 642.625281ms 652.535498ms 658.990943ms 659.106491ms 659.846051ms 667.94857ms 667.974558ms 673.701096ms 675.447576ms 675.894367ms 676.992784ms 677.106685ms 677.133236ms 677.430564ms 681.916504ms 682.051123ms 682.375458ms 683.148164ms 684.679392ms 687.220077ms 687.610717ms 689.044072ms 689.377429ms 691.674122ms 693.023519ms 693.300543ms 694.240395ms 695.533588ms 696.817445ms 700.505786ms 700.830288ms 703.011804ms 703.392595ms 704.347815ms 705.649018ms 707.381117ms 707.406965ms 712.202317ms 712.336678ms 712.349648ms 712.531563ms 712.77511ms 713.561141ms 714.203381ms 715.082885ms 716.177062ms 716.232602ms 718.277471ms 719.630293ms 719.95046ms 724.012176ms 724.192718ms 725.090578ms 725.814178ms 725.903684ms 729.455395ms 729.950176ms 735.428459ms 735.693734ms 737.130136ms 737.32105ms 738.321962ms 743.538144ms 743.75481ms 747.22582ms 748.323445ms 752.745886ms 754.473568ms 755.128824ms 756.190724ms 760.593971ms 761.686558ms 763.80751ms 771.369617ms 773.203137ms 777.270694ms 777.288592ms 780.143205ms 784.749891ms 791.066153ms 796.873502ms 798.539863ms 808.811663ms 809.66934ms 810.040393ms 813.202194ms 813.886344ms 814.876777ms 815.495164ms 818.896301ms 822.951882ms 823.723433ms 825.582437ms 826.611969ms 827.358816ms 830.53021ms 832.252627ms 832.611402ms 833.135634ms 839.247675ms 843.001835ms 844.967444ms 846.436303ms 850.358635ms 851.926587ms 852.996654ms 853.097962ms 856.414012ms 857.437063ms 868.5008ms 873.575252ms 879.590181ms 888.308656ms 892.996946ms 904.567506ms 904.598782ms 931.921986ms 974.80301ms 1.073647408s 1.078777248s 1.083283568s 1.095408711s 1.119125379s 1.128969162s 1.155855589s 1.170936405s 1.171648237s 1.179283294s 1.180539853s 1.181998853s 1.210106024s 1.210957657s 1.248373545s 1.270196583s 1.273931316s 1.311238377s 1.313438313s 1.313870707s 1.317432351s 1.317785209s 1.317823007s 1.322802007s 1.325544215s 1.342264099s 1.345750031s 1.361132085s 1.366945596s]
Mar 23 00:11:02.239: INFO: 50 %ile: 718.277471ms
Mar 23 00:11:02.239: INFO: 90 %ile: 1.179283294s
Mar 23 00:11:02.239: INFO: 99 %ile: 1.361132085s
Mar 23 00:11:02.239: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:11:02.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3714" for this suite.
• [SLOW TEST:14.809 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":119,"skipped":1756,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:11:02.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 23 00:11:02.712: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 23 00:11:04.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519062, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519062, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519062, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519062, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 23 00:11:07.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:11:07.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2341" for this suite.
STEP: Destroying namespace "webhook-2341-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.009 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":120,"skipped":1844,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:11:08.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 23 00:11:13.072: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:11:13.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8823" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":1846,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:11:13.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar 23 00:11:14.173: INFO: Pod name wrapped-volume-race-d2eaa880-d71e-4296-b488-8be63700ab7d: Found 0 pods out of 5
Mar 23 00:11:19.203: INFO: Pod name wrapped-volume-race-d2eaa880-d71e-4296-b488-8be63700ab7d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d2eaa880-d71e-4296-b488-8be63700ab7d in namespace emptydir-wrapper-9562, will wait for the garbage collector to delete the pods
Mar 23 00:11:31.356: INFO: Deleting ReplicationController wrapped-volume-race-d2eaa880-d71e-4296-b488-8be63700ab7d took: 25.147999ms
Mar 23 00:11:31.756: INFO: Terminating ReplicationController wrapped-volume-race-d2eaa880-d71e-4296-b488-8be63700ab7d pods took: 400.218802ms
STEP: Creating RC which spawns configmap-volume pods
Mar 23 00:11:43.487: INFO: Pod name wrapped-volume-race-e451e2eb-6e6e-4af2-bb01-a29089e444a1: Found 0 pods out of 5
Mar 23 00:11:48.495: INFO: Pod name wrapped-volume-race-e451e2eb-6e6e-4af2-bb01-a29089e444a1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e451e2eb-6e6e-4af2-bb01-a29089e444a1 in namespace emptydir-wrapper-9562, will wait for the garbage collector to delete the pods
Mar 23 00:12:00.592: INFO: Deleting ReplicationController wrapped-volume-race-e451e2eb-6e6e-4af2-bb01-a29089e444a1 took: 8.041143ms
Mar 23 00:12:00.992: INFO: Terminating ReplicationController wrapped-volume-race-e451e2eb-6e6e-4af2-bb01-a29089e444a1 pods took: 400.267601ms
STEP: Creating RC which spawns configmap-volume pods
Mar 23 00:12:12.834: INFO: Pod name wrapped-volume-race-80cd2e19-de98-48ea-9e43-3e5fa845930a: Found 0 pods out of 5
Mar 23 00:12:17.840: INFO: Pod name wrapped-volume-race-80cd2e19-de98-48ea-9e43-3e5fa845930a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-80cd2e19-de98-48ea-9e43-3e5fa845930a in namespace emptydir-wrapper-9562, will wait for the garbage collector to delete the pods
Mar 23 00:12:29.926: INFO: Deleting ReplicationController wrapped-volume-race-80cd2e19-de98-48ea-9e43-3e5fa845930a took: 7.691402ms
Mar 23 00:12:30.226: INFO: Terminating ReplicationController wrapped-volume-race-80cd2e19-de98-48ea-9e43-3e5fa845930a pods took: 300.321358ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:12:44.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9562" for this suite. • [SLOW TEST:91.317 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":122,"skipped":1851,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:12:44.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:12:44.669: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 23 00:12:49.678: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 23 00:12:49.678: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 23 00:12:51.681: INFO: Creating deployment 
"test-rollover-deployment" Mar 23 00:12:51.704: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 23 00:12:53.711: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 23 00:12:53.716: INFO: Ensure that both replica sets have 1 created replica Mar 23 00:12:53.721: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 23 00:12:53.727: INFO: Updating deployment test-rollover-deployment Mar 23 00:12:53.727: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 23 00:12:55.738: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 23 00:12:55.745: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 23 00:12:55.751: INFO: all replica sets need to contain the pod-template-hash label Mar 23 00:12:55.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519173, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:12:57.760: INFO: all replica sets need to contain the pod-template-hash label Mar 23 00:12:57.760: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519176, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:12:59.760: INFO: all replica sets need to contain the pod-template-hash label Mar 23 00:12:59.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519176, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:13:01.758: INFO: all 
replica sets need to contain the pod-template-hash label Mar 23 00:13:01.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519176, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:13:03.759: INFO: all replica sets need to contain the pod-template-hash label Mar 23 00:13:03.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519176, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:13:05.759: INFO: all replica sets need to contain the pod-template-hash label Mar 23 00:13:05.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519176, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720519171, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:13:07.759: INFO: Mar 23 00:13:07.759: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 23 00:13:07.767: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8933 /apis/apps/v1/namespaces/deployment-8933/deployments/test-rollover-deployment e0a60e4b-7e3d-4678-b747-2aa6a4c3a4c3 2015410 2 2020-03-23 00:12:51 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000861048 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-23 00:12:51 +0000 UTC,LastTransitionTime:2020-03-23 00:12:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-03-23 00:13:06 +0000 UTC,LastTransitionTime:2020-03-23 00:12:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 23 00:13:07.771: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-8933 /apis/apps/v1/namespaces/deployment-8933/replicasets/test-rollover-deployment-78df7bc796 3fb484fd-4551-47c1-a44c-2d4d6189a3b2 2015398 2 2020-03-23 00:12:53 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e0a60e4b-7e3d-4678-b747-2aa6a4c3a4c3 0xc004d2a337 0xc004d2a338}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d2a3a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 23 00:13:07.771: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 23 00:13:07.771: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8933 /apis/apps/v1/namespaces/deployment-8933/replicasets/test-rollover-controller e0b338ca-0e8c-43c7-89b2-e4ddefe9ef5a 2015408 2 2020-03-23 00:12:44 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e0a60e4b-7e3d-4678-b747-2aa6a4c3a4c3 0xc004d2a267 0xc004d2a268}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004d2a2c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 23 00:13:07.771: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-8933 /apis/apps/v1/namespaces/deployment-8933/replicasets/test-rollover-deployment-f6c94f66c ab165d4e-92f1-4a5f-8848-cfed9d1ec1e3 2015350 2 2020-03-23 00:12:51 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e0a60e4b-7e3d-4678-b747-2aa6a4c3a4c3 0xc004d2a410 0xc004d2a411}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d2a488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 23 00:13:07.774: INFO: Pod "test-rollover-deployment-78df7bc796-v5dpw" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-v5dpw test-rollover-deployment-78df7bc796- deployment-8933 /api/v1/namespaces/deployment-8933/pods/test-rollover-deployment-78df7bc796-v5dpw d34d33bd-29ee-427b-b006-57db6bca679e 2015368 0 2020-03-23 00:12:53 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 3fb484fd-4551-47c1-a44c-2d4d6189a3b2 0xc0048666a7 0xc0048666a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2wzc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2wzc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2wzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:12:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:12:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:12:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:12:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.218,StartTime:2020-03-23 00:12:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 00:12:56 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://aa82fb3bca353f48ef0ded1ea856144bbf5a7da373158ce9ae90298730f040b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:13:07.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8933" for this suite. • [SLOW TEST:23.241 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":123,"skipped":1854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:13:07.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow 
substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Mar 23 00:13:07.850: INFO: Waiting up to 5m0s for pod "var-expansion-411451e9-701b-49e1-be44-0880a000ebec" in namespace "var-expansion-8508" to be "Succeeded or Failed" Mar 23 00:13:07.868: INFO: Pod "var-expansion-411451e9-701b-49e1-be44-0880a000ebec": Phase="Pending", Reason="", readiness=false. Elapsed: 18.197181ms Mar 23 00:13:10.324: INFO: Pod "var-expansion-411451e9-701b-49e1-be44-0880a000ebec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474301554s Mar 23 00:13:12.328: INFO: Pod "var-expansion-411451e9-701b-49e1-be44-0880a000ebec": Phase="Running", Reason="", readiness=true. Elapsed: 4.478543251s Mar 23 00:13:14.332: INFO: Pod "var-expansion-411451e9-701b-49e1-be44-0880a000ebec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.482380616s STEP: Saw pod success Mar 23 00:13:14.332: INFO: Pod "var-expansion-411451e9-701b-49e1-be44-0880a000ebec" satisfied condition "Succeeded or Failed" Mar 23 00:13:14.335: INFO: Trying to get logs from node latest-worker2 pod var-expansion-411451e9-701b-49e1-be44-0880a000ebec container dapi-container: STEP: delete the pod Mar 23 00:13:14.378: INFO: Waiting for pod var-expansion-411451e9-701b-49e1-be44-0880a000ebec to disappear Mar 23 00:13:14.389: INFO: Pod var-expansion-411451e9-701b-49e1-be44-0880a000ebec no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:13:14.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8508" for this suite. 
• [SLOW TEST:6.614 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":1878,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:13:14.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 23 00:13:14.480: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 23 00:13:14.498: INFO: Waiting for terminating namespaces to be deleted... 
Mar 23 00:13:14.503: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 23 00:13:14.516: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:13:14.516: INFO: Container kindnet-cni ready: true, restart count 0 Mar 23 00:13:14.516: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:13:14.516: INFO: Container kube-proxy ready: true, restart count 0 Mar 23 00:13:14.516: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 23 00:13:14.530: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:13:14.530: INFO: Container kindnet-cni ready: true, restart count 0 Mar 23 00:13:14.530: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:13:14.530: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fec68543731f76], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:13:15.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8383" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":125,"skipped":1946,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:13:15.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Mar 23 00:13:15.648: INFO: Waiting up to 5m0s for pod "var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44" in namespace "var-expansion-7066" to be "Succeeded or Failed" Mar 23 00:13:15.663: INFO: Pod "var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44": Phase="Pending", Reason="", readiness=false. Elapsed: 15.45722ms Mar 23 00:13:17.684: INFO: Pod "var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036699124s Mar 23 00:13:19.700: INFO: Pod "var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051744699s STEP: Saw pod success Mar 23 00:13:19.700: INFO: Pod "var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44" satisfied condition "Succeeded or Failed" Mar 23 00:13:19.702: INFO: Trying to get logs from node latest-worker pod var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44 container dapi-container: STEP: delete the pod Mar 23 00:13:19.755: INFO: Waiting for pod var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44 to disappear Mar 23 00:13:19.759: INFO: Pod var-expansion-30d618c4-78cb-439b-bc3d-3a88dadf3e44 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:13:19.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7066" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":1947,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:13:19.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0323 00:13:31.067267 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 23 00:13:31.067: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:13:31.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2668" for this suite. 
• [SLOW TEST:11.308 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":127,"skipped":2001,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:13:31.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:13:31.203: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b050081a-7aa3-4019-9452-4335d35be215", Controller:(*bool)(0xc0028e490a), BlockOwnerDeletion:(*bool)(0xc0028e490b)}} Mar 23 00:13:31.260: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6fba587c-0725-4bef-b562-6926f2d46a22", Controller:(*bool)(0xc0028e4ada), BlockOwnerDeletion:(*bool)(0xc0028e4adb)}} Mar 23 00:13:31.263: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1e232917-11a0-4e79-a5f8-f54541d6e511", Controller:(*bool)(0xc00054cef2), BlockOwnerDeletion:(*bool)(0xc00054cef3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:13:36.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3717" for this suite. • [SLOW TEST:5.473 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":128,"skipped":2003,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:13:36.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace 
statefulset-5149 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5149 STEP: Creating statefulset with conflicting port in namespace statefulset-5149 STEP: Waiting until pod test-pod will start running in namespace statefulset-5149 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5149 Mar 23 00:13:41.296: INFO: Observed stateful pod in namespace: statefulset-5149, name: ss-0, uid: d0845b69-b898-47d8-8a24-9e4386408d2e, status phase: Pending. Waiting for statefulset controller to delete. Mar 23 00:13:42.728: INFO: Observed stateful pod in namespace: statefulset-5149, name: ss-0, uid: d0845b69-b898-47d8-8a24-9e4386408d2e, status phase: Failed. Waiting for statefulset controller to delete. Mar 23 00:13:42.737: INFO: Observed stateful pod in namespace: statefulset-5149, name: ss-0, uid: d0845b69-b898-47d8-8a24-9e4386408d2e, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 23 00:13:42.776: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5149 STEP: Removing pod with conflicting port in namespace statefulset-5149 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5149 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 23 00:13:56.956: INFO: Deleting all statefulset in ns statefulset-5149 Mar 23 00:13:56.959: INFO: Scaling statefulset ss to 0 Mar 23 00:14:17.006: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:14:17.008: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:14:17.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5149" for this suite. • [SLOW TEST:40.492 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":129,"skipped":2004,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:14:17.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:14:21.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9799" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2009,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:14:21.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-4a6f0dd8-1e18-4a7f-886c-fde203237d82 STEP: Creating the pod STEP: Waiting for 
pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:14:25.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-746" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:14:25.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:14:25.358: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:14:26.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6205" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":132,"skipped":2053,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:14:26.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-6bae535e-07bc-4a68-9047-07ba7b9bfba3 STEP: Creating configMap with name cm-test-opt-upd-55fe2299-aed1-4f29-8a5b-612542fa8368 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6bae535e-07bc-4a68-9047-07ba7b9bfba3 STEP: Updating configmap cm-test-opt-upd-55fe2299-aed1-4f29-8a5b-612542fa8368 STEP: Creating configMap with name cm-test-opt-create-14bd9c0b-6c15-4a48-9ebe-50ae2f5c8cc2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:15:53.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3950" for this suite. 
• [SLOW TEST:87.210 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2082,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:15:53.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:04.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5688" for this suite. 
• [SLOW TEST:11.140 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":134,"skipped":2088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:04.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Mar 23 00:16:04.797: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 23 00:16:04.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-138' Mar 23 00:16:07.789: INFO: stderr: "" Mar 23 00:16:07.789: INFO: stdout: "service/agnhost-slave 
created\n" Mar 23 00:16:07.789: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 23 00:16:07.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-138' Mar 23 00:16:08.051: INFO: stderr: "" Mar 23 00:16:08.051: INFO: stdout: "service/agnhost-master created\n" Mar 23 00:16:08.051: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 23 00:16:08.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-138' Mar 23 00:16:08.330: INFO: stderr: "" Mar 23 00:16:08.330: INFO: stdout: "service/frontend created\n" Mar 23 00:16:08.330: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 23 00:16:08.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-138' Mar 23 00:16:08.581: INFO: stderr: "" Mar 23 00:16:08.581: INFO: stdout: "deployment.apps/frontend created\n" Mar 23 00:16:08.582: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: 
master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 23 00:16:08.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-138' Mar 23 00:16:08.904: INFO: stderr: "" Mar 23 00:16:08.904: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 23 00:16:08.904: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 23 00:16:08.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-138' Mar 23 00:16:09.144: INFO: stderr: "" Mar 23 00:16:09.145: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 23 00:16:09.145: INFO: Waiting for all frontend pods to be Running. Mar 23 00:16:19.195: INFO: Waiting for frontend to serve content. Mar 23 00:16:19.227: INFO: Trying to add a new entry to the guestbook. Mar 23 00:16:19.262: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources Mar 23 00:16:19.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-138' Mar 23 00:16:19.575: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:16:19.575: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 23 00:16:19.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-138' Mar 23 00:16:19.712: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:16:19.712: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 23 00:16:19.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-138' Mar 23 00:16:19.833: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:16:19.833: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 23 00:16:19.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-138' Mar 23 00:16:19.928: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:16:19.928: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 23 00:16:19.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-138' Mar 23 00:16:20.032: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:16:20.032: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 23 00:16:20.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-138' Mar 23 00:16:20.163: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:16:20.163: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:20.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-138" for this suite. 
• [SLOW TEST:15.426 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":135,"skipped":2113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:20.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 00:16:20.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd" in namespace "projected-7837" to be "Succeeded or Failed" Mar 23 00:16:20.333: INFO: Pod "downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.329034ms Mar 23 00:16:22.338: INFO: Pod "downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029813083s Mar 23 00:16:24.342: INFO: Pod "downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd": Phase="Running", Reason="", readiness=true. Elapsed: 4.033883661s Mar 23 00:16:26.346: INFO: Pod "downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038241805s STEP: Saw pod success Mar 23 00:16:26.346: INFO: Pod "downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd" satisfied condition "Succeeded or Failed" Mar 23 00:16:26.349: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd container client-container: STEP: delete the pod Mar 23 00:16:26.397: INFO: Waiting for pod downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd to disappear Mar 23 00:16:26.408: INFO: Pod downwardapi-volume-ac130984-762f-4e53-9887-f2419617f5fd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:26.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7837" for this suite. 
• [SLOW TEST:6.242 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2163,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:26.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 23 00:16:26.474: INFO: Waiting up to 5m0s for pod "pod-745aef7a-e674-4b60-a63c-1b20e69d2964" in namespace "emptydir-6768" to be "Succeeded or Failed" Mar 23 00:16:26.478: INFO: Pod "pod-745aef7a-e674-4b60-a63c-1b20e69d2964": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917525ms Mar 23 00:16:28.482: INFO: Pod "pod-745aef7a-e674-4b60-a63c-1b20e69d2964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00789309s Mar 23 00:16:30.486: INFO: Pod "pod-745aef7a-e674-4b60-a63c-1b20e69d2964": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01216071s STEP: Saw pod success Mar 23 00:16:30.486: INFO: Pod "pod-745aef7a-e674-4b60-a63c-1b20e69d2964" satisfied condition "Succeeded or Failed" Mar 23 00:16:30.489: INFO: Trying to get logs from node latest-worker pod pod-745aef7a-e674-4b60-a63c-1b20e69d2964 container test-container: STEP: delete the pod Mar 23 00:16:30.508: INFO: Waiting for pod pod-745aef7a-e674-4b60-a63c-1b20e69d2964 to disappear Mar 23 00:16:30.512: INFO: Pod pod-745aef7a-e674-4b60-a63c-1b20e69d2964 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:30.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6768" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2164,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:30.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:34.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6358" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2173,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:34.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:16:34.702: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:38.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1683" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2191,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:38.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 23 00:16:38.852: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-119 /api/v1/namespaces/watch-119/configmaps/e2e-watch-test-resource-version 4127e646-ac19-4c81-877e-8f7be4ce6012 2016885 0 2020-03-23 00:16:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:16:38.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-119 /api/v1/namespaces/watch-119/configmaps/e2e-watch-test-resource-version 4127e646-ac19-4c81-877e-8f7be4ce6012 2016886 0 2020-03-23 00:16:38 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:38.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-119" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":140,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:38.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-1b48418f-e72c-47a0-bea2-78940c4aac0d [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:38.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-676" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":141,"skipped":2237,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:38.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 23 00:16:38.970: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:16:55.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5562" for this suite. 
• [SLOW TEST:16.793 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":142,"skipped":2249,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:16:55.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-b4s6v in namespace proxy-8374 I0323 00:16:55.789730 7 runners.go:190] Created replication controller with name: proxy-service-b4s6v, namespace: proxy-8374, replica count: 1 I0323 00:16:56.840122 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 00:16:57.840338 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 
00:16:58.840584 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 00:16:59.840756 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0323 00:17:00.840970 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0323 00:17:01.841379 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0323 00:17:02.841573 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0323 00:17:03.841806 7 runners.go:190] proxy-service-b4s6v Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 00:17:03.846: INFO: setup took 8.099220389s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 23 00:17:03.851: INFO: (0) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 5.01889ms) Mar 23 00:17:03.852: INFO: (0) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.779623ms) Mar 23 00:17:03.852: INFO: (0) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 5.684676ms) Mar 23 00:17:03.852: INFO: (0) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 5.365805ms) Mar 23 00:17:03.853: INFO: (0) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... 
(200; 4.197366ms) Mar 23 00:17:03.853: INFO: (0) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.877262ms) Mar 23 00:17:03.855: INFO: (0) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 6.77017ms) Mar 23 00:17:03.855: INFO: (0) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 7.870524ms) Mar 23 00:17:03.855: INFO: (0) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 8.358473ms) Mar 23 00:17:03.855: INFO: (0) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 8.285961ms) Mar 23 00:17:03.855: INFO: (0) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 7.055272ms) Mar 23 00:17:03.860: INFO: (0) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 13.896243ms) Mar 23 00:17:03.860: INFO: (0) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test (200; 5.01741ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.987188ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 5.050582ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 5.172354ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 5.204845ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 5.264638ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... 
(200; 5.222132ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 5.312491ms) Mar 23 00:17:03.866: INFO: (1) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 5.494165ms) Mar 23 00:17:03.867: INFO: (1) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 5.489543ms) Mar 23 00:17:03.867: INFO: (1) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 5.648982ms) Mar 23 00:17:03.867: INFO: (1) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: ... (200; 5.810105ms) Mar 23 00:17:03.870: INFO: (2) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.37505ms) Mar 23 00:17:03.870: INFO: (2) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 3.459985ms) Mar 23 00:17:03.870: INFO: (2) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 3.38691ms) Mar 23 00:17:03.870: INFO: (2) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 3.611107ms) Mar 23 00:17:03.870: INFO: (2) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 3.597268ms) Mar 23 00:17:03.871: INFO: (2) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 3.607795ms) Mar 23 00:17:03.871: INFO: (2) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 3.84926ms) Mar 23 00:17:03.871: INFO: (2) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.463711ms) Mar 23 00:17:03.872: INFO: (2) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.713231ms) Mar 23 00:17:03.872: INFO: (2) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo 
(200; 4.672323ms) Mar 23 00:17:03.872: INFO: (2) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.726314ms) Mar 23 00:17:03.872: INFO: (2) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.971648ms) Mar 23 00:17:03.872: INFO: (2) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test<... (200; 5.321863ms) Mar 23 00:17:03.872: INFO: (2) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 5.376365ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 4.164493ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.926348ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 5.004097ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 5.020812ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 5.051711ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 5.031371ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 5.099927ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 5.02697ms) Mar 23 00:17:03.877: INFO: (3) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: ... 
(200; 7.557817ms) Mar 23 00:17:03.889: INFO: (4) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 9.280124ms) Mar 23 00:17:03.889: INFO: (4) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 9.277343ms) Mar 23 00:17:03.889: INFO: (4) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 9.336329ms) Mar 23 00:17:03.889: INFO: (4) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 9.344341ms) Mar 23 00:17:03.889: INFO: (4) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 9.350153ms) Mar 23 00:17:03.890: INFO: (4) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 9.493664ms) Mar 23 00:17:03.890: INFO: (4) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 9.54786ms) Mar 23 00:17:03.890: INFO: (4) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 9.696259ms) Mar 23 00:17:03.890: INFO: (4) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 9.734822ms) Mar 23 00:17:03.890: INFO: (4) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 9.940566ms) Mar 23 00:17:03.890: INFO: (4) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: ... 
(200; 3.77025ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.813275ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 3.868402ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 3.97107ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 3.934907ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 3.979205ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 3.939307ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.022659ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.053566ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.100952ms) Mar 23 00:17:03.894: INFO: (5) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.15967ms) Mar 23 00:17:03.897: INFO: (6) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 2.91297ms) Mar 23 00:17:03.898: INFO: (6) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: ... (200; 4.159256ms) Mar 23 00:17:03.899: INFO: (6) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... 
(200; 4.221846ms) Mar 23 00:17:03.899: INFO: (6) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 4.244828ms) Mar 23 00:17:03.899: INFO: (6) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.29703ms) Mar 23 00:17:03.899: INFO: (6) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 4.28616ms) Mar 23 00:17:03.899: INFO: (6) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 4.304239ms) Mar 23 00:17:03.899: INFO: (6) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.58246ms) Mar 23 00:17:03.899: INFO: (6) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.674604ms) Mar 23 00:17:03.900: INFO: (6) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 5.255553ms) Mar 23 00:17:03.900: INFO: (6) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 5.313717ms) Mar 23 00:17:03.900: INFO: (6) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 5.258596ms) Mar 23 00:17:03.900: INFO: (6) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 5.313021ms) Mar 23 00:17:03.900: INFO: (6) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 5.54078ms) Mar 23 00:17:03.903: INFO: (7) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 3.285123ms) Mar 23 00:17:03.903: INFO: (7) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... 
(200; 3.334168ms) Mar 23 00:17:03.903: INFO: (7) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.443732ms) Mar 23 00:17:03.904: INFO: (7) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 3.342832ms) Mar 23 00:17:03.904: INFO: (7) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: ... (200; 4.473705ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.553278ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.730346ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.80317ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 4.8516ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.793299ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 4.751941ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.799189ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.822068ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.80741ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 4.886855ms) Mar 23 00:17:03.911: INFO: (8) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test (200; 5.160583ms) Mar 23 00:17:03.913: INFO: (9) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test<... 
(200; 3.455583ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 3.472159ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.497167ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 3.540268ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.703165ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.07148ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.278639ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 4.353795ms) Mar 23 00:17:03.915: INFO: (9) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.370662ms) Mar 23 00:17:03.916: INFO: (9) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 4.379359ms) Mar 23 00:17:03.916: INFO: (9) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.48034ms) Mar 23 00:17:03.916: INFO: (9) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.695651ms) Mar 23 00:17:03.916: INFO: (9) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.65026ms) Mar 23 00:17:03.916: INFO: (9) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.818777ms) Mar 23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.027587ms) Mar 23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.005453ms) Mar 
23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 4.082121ms) Mar 23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.092136ms) Mar 23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.101516ms) Mar 23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 4.174371ms) Mar 23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test (200; 4.280743ms) Mar 23 00:17:03.920: INFO: (10) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 4.3349ms) Mar 23 00:17:03.921: INFO: (10) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 4.769602ms) Mar 23 00:17:03.921: INFO: (10) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 5.087792ms) Mar 23 00:17:03.921: INFO: (10) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 5.100693ms) Mar 23 00:17:03.921: INFO: (10) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 5.174227ms) Mar 23 00:17:03.921: INFO: (10) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 5.270259ms) Mar 23 00:17:03.921: INFO: (10) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 5.363158ms) Mar 23 00:17:03.921: INFO: (10) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 5.315003ms) Mar 23 00:17:03.924: INFO: (11) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... 
(200; 2.445009ms) Mar 23 00:17:03.924: INFO: (11) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 2.502963ms) Mar 23 00:17:03.924: INFO: (11) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 2.5633ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.282069ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.407233ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.442957ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 4.468371ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test (200; 4.679165ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 4.704257ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.787919ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.826204ms) Mar 23 00:17:03.926: INFO: (11) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.903955ms) Mar 23 00:17:03.927: INFO: (11) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 5.133946ms) Mar 23 00:17:03.927: INFO: (11) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 5.083691ms) Mar 23 00:17:03.927: INFO: (11) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 5.587299ms) Mar 23 00:17:03.929: INFO: (12) 
/api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 1.956964ms) Mar 23 00:17:03.931: INFO: (12) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: ... (200; 4.485484ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 4.492432ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 4.488995ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.647791ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.640996ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.626486ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 4.659185ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.659612ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.749445ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 4.852629ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.863036ms) Mar 23 00:17:03.932: INFO: (12) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.839067ms) Mar 23 00:17:03.933: INFO: (12) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 5.684601ms) Mar 23 00:17:03.933: INFO: (12) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 5.761812ms) Mar 23 00:17:03.936: INFO: 
(13) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 2.972022ms) Mar 23 00:17:03.936: INFO: (13) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.061768ms) Mar 23 00:17:03.936: INFO: (13) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test<... (200; 3.848719ms) Mar 23 00:17:03.937: INFO: (13) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 3.881885ms) Mar 23 00:17:03.937: INFO: (13) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 3.956021ms) Mar 23 00:17:03.937: INFO: (13) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.175286ms) Mar 23 00:17:03.937: INFO: (13) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 4.182294ms) Mar 23 00:17:03.938: INFO: (13) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... 
(200; 4.3421ms) Mar 23 00:17:03.938: INFO: (13) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.378035ms) Mar 23 00:17:03.938: INFO: (13) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.38695ms) Mar 23 00:17:03.938: INFO: (13) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.623171ms) Mar 23 00:17:03.938: INFO: (13) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.628326ms) Mar 23 00:17:03.940: INFO: (14) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 1.864644ms) Mar 23 00:17:03.940: INFO: (14) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test (200; 4.044837ms) Mar 23 00:17:03.942: INFO: (14) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.122538ms) Mar 23 00:17:03.942: INFO: (14) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 4.116633ms) Mar 23 00:17:03.942: INFO: (14) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.076211ms) Mar 23 00:17:03.942: INFO: (14) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... 
(200; 4.118045ms) Mar 23 00:17:03.942: INFO: (14) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.51193ms) Mar 23 00:17:03.942: INFO: (14) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 4.499066ms) Mar 23 00:17:03.942: INFO: (14) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.553183ms) Mar 23 00:17:03.943: INFO: (14) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.610732ms) Mar 23 00:17:03.943: INFO: (14) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.742181ms) Mar 23 00:17:03.943: INFO: (14) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.982625ms) Mar 23 00:17:03.943: INFO: (14) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.960728ms) Mar 23 00:17:03.945: INFO: (15) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 2.557146ms) Mar 23 00:17:03.946: INFO: (15) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 2.627114ms) Mar 23 00:17:03.946: INFO: (15) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 2.921854ms) Mar 23 00:17:03.946: INFO: (15) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 3.228729ms) Mar 23 00:17:03.946: INFO: (15) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 3.31798ms) Mar 23 00:17:03.946: INFO: (15) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... 
(200; 3.34076ms) Mar 23 00:17:03.946: INFO: (15) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 3.509071ms) Mar 23 00:17:03.947: INFO: (15) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test (200; 3.774343ms) Mar 23 00:17:03.947: INFO: (15) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.472926ms) Mar 23 00:17:03.948: INFO: (15) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.609522ms) Mar 23 00:17:03.948: INFO: (15) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 5.293466ms) Mar 23 00:17:03.949: INFO: (15) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 5.600737ms) Mar 23 00:17:03.949: INFO: (15) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 5.559629ms) Mar 23 00:17:03.949: INFO: (15) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 5.848679ms) Mar 23 00:17:03.951: INFO: (16) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 2.129863ms) Mar 23 00:17:03.951: INFO: (16) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 2.285732ms) Mar 23 00:17:03.952: INFO: (16) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 3.319817ms) Mar 23 00:17:03.954: INFO: (16) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.946229ms) Mar 23 00:17:03.954: INFO: (16) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... 
(200; 5.188959ms) Mar 23 00:17:03.956: INFO: (16) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 7.275122ms) Mar 23 00:17:03.957: INFO: (16) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 8.495863ms) Mar 23 00:17:03.958: INFO: (16) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 8.689349ms) Mar 23 00:17:03.958: INFO: (16) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 8.718548ms) Mar 23 00:17:03.958: INFO: (16) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 8.718148ms) Mar 23 00:17:03.958: INFO: (16) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 9.445352ms) Mar 23 00:17:03.958: INFO: (16) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 9.392883ms) Mar 23 00:17:03.958: INFO: (16) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test<... (200; 9.689794ms) Mar 23 00:17:03.961: INFO: (17) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 2.565162ms) Mar 23 00:17:03.961: INFO: (17) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 2.824189ms) Mar 23 00:17:03.961: INFO: (17) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 2.827186ms) Mar 23 00:17:03.961: INFO: (17) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... 
(200; 2.838719ms) Mar 23 00:17:03.962: INFO: (17) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 3.434259ms) Mar 23 00:17:03.962: INFO: (17) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 3.480117ms) Mar 23 00:17:03.962: INFO: (17) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.82425ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 3.93532ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.927989ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.038886ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 3.994121ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.056352ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 4.065756ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 4.006552ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.031276ms) Mar 23 00:17:03.963: INFO: (17) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test (200; 3.388051ms) Mar 23 00:17:03.966: INFO: (18) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.351814ms) Mar 23 00:17:03.966: INFO: (18) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: ... 
(200; 3.510025ms) Mar 23 00:17:03.967: INFO: (18) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 3.962718ms) Mar 23 00:17:03.967: INFO: (18) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:1080/proxy/: test<... (200; 4.174669ms) Mar 23 00:17:03.967: INFO: (18) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 4.26943ms) Mar 23 00:17:03.967: INFO: (18) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.415137ms) Mar 23 00:17:03.967: INFO: (18) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 4.457375ms) Mar 23 00:17:03.967: INFO: (18) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 4.457036ms) Mar 23 00:17:03.967: INFO: (18) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.720073ms) Mar 23 00:17:03.970: INFO: (19) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 2.794424ms) Mar 23 00:17:03.970: INFO: (19) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:462/proxy/: tls qux (200; 2.82815ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:162/proxy/: bar (200; 3.355607ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:443/proxy/: test<... 
(200; 3.426321ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.50792ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/pods/proxy-service-b4s6v-vw49k/proxy/: test (200; 3.449945ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/pods/https:proxy-service-b4s6v-vw49k:460/proxy/: tls baz (200; 3.454569ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:1080/proxy/: ... (200; 3.492678ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname2/proxy/: tls qux (200; 3.502868ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/pods/http:proxy-service-b4s6v-vw49k:160/proxy/: foo (200; 3.502471ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname1/proxy/: foo (200; 3.727062ms) Mar 23 00:17:03.971: INFO: (19) /api/v1/namespaces/proxy-8374/services/https:proxy-service-b4s6v:tlsportname1/proxy/: tls baz (200; 3.767185ms) Mar 23 00:17:03.972: INFO: (19) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname2/proxy/: bar (200; 4.028929ms) Mar 23 00:17:03.972: INFO: (19) /api/v1/namespaces/proxy-8374/services/proxy-service-b4s6v:portname2/proxy/: bar (200; 4.069338ms) Mar 23 00:17:03.972: INFO: (19) /api/v1/namespaces/proxy-8374/services/http:proxy-service-b4s6v:portname1/proxy/: foo (200; 4.137065ms) STEP: deleting ReplicationController proxy-service-b4s6v in namespace proxy-8374, will wait for the garbage collector to delete the pods Mar 23 00:17:04.030: INFO: Deleting ReplicationController proxy-service-b4s6v took: 6.275048ms Mar 23 00:17:04.330: INFO: Terminating ReplicationController proxy-service-b4s6v pods took: 300.265875ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:17:13.230: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8374" for this suite. • [SLOW TEST:17.532 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":143,"skipped":2263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:17:13.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:17:30.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4350" for this suite. • [SLOW TEST:17.582 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":144,"skipped":2306,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:17:30.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 23 00:17:30.897: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:17:46.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8089" for this suite. 
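Editor's note: the CRD publishing test above verifies that marking a version `served: false` removes its schema from the published OpenAPI spec while leaving the other version untouched. A minimal sketch of that filtering rule, with illustrative names (this is not the kube-apiserver's actual publishing code):

```python
# Sketch: only versions with served=True contribute definition keys to
# the published OpenAPI spec. Group, kind, and key format here are
# illustrative, not the real apiserver naming scheme.

def published_definitions(group, kind, versions):
    """Return OpenAPI definition keys for the served versions of a CRD."""
    return [
        f"{group}.{v['name']}.{kind}"
        for v in versions
        if v.get("served", False)
    ]

versions = [
    {"name": "v1", "served": True, "storage": True},
    {"name": "v2", "served": True, "storage": False},
]

before = published_definitions("com.example", "E2eTestCrd", versions)

# Mirror the test's "mark a version not served" step.
versions[1]["served"] = False
after = published_definitions("com.example", "E2eTestCrd", versions)
```

With both versions served, `before` lists two definitions; after flipping `served` on v2, only the v1 definition remains, which is the behavior the test asserts against the real `/openapi/v2` endpoint.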
• [SLOW TEST:15.302 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":145,"skipped":2314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:17:46.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 00:17:46.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14" in namespace "downward-api-3307" to be "Succeeded or Failed" Mar 23 00:17:46.251: INFO: Pod "downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14": Phase="Pending", Reason="", readiness=false. 
Elapsed: 76.419076ms Mar 23 00:17:48.255: INFO: Pod "downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080647661s Mar 23 00:17:50.258: INFO: Pod "downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083725472s STEP: Saw pod success Mar 23 00:17:50.258: INFO: Pod "downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14" satisfied condition "Succeeded or Failed" Mar 23 00:17:50.260: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14 container client-container: STEP: delete the pod Mar 23 00:17:50.276: INFO: Waiting for pod downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14 to disappear Mar 23 00:17:50.287: INFO: Pod downwardapi-volume-4762f077-6ea1-4a46-b550-742582c7fb14 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:17:50.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3307" for this suite. 
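Editor's note: the downward API volume test above creates a pod whose volume projects pod metadata (here the pod name) into a file the container then reads. A hedged sketch of such a manifest as a Python dict; the field names follow the Pod API schema, but the image, mount path, and file name are illustrative choices, not the test's exact fixture:

```python
# Minimal downward API volume pod, expressed as a dict mirroring the
# Pod API schema. Image and paths are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            # The container prints the projected file, which holds the pod name.
            "command": ["sh", "-c", "cat /etc/podinfo/podname"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "podname",
                    "fieldRef": {"fieldPath": "metadata.name"},
                }],
            },
        }],
        "restartPolicy": "Never",
    },
}
```

The test then waits for the pod to reach "Succeeded or Failed" and checks the container log, exactly as the INFO lines above show.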
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2338,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:17:50.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 23 00:17:53.411: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:17:53.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1003" for this suite. 
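Editor's note: the termination-message test above exercises the kubelet's fallback rule: with `terminationMessagePolicy: FallbackToLogsOnError`, an empty termination-message file on a failed container is replaced by the tail of the container log (which is why the expected message `DONE` comes from log output). A simplified model of that rule; the function name is hypothetical, and the real kubelet additionally caps the fallback message at a few kilobytes of log tail:

```python
# Simplified model of the FallbackToLogsOnError termination-message
# policy. Hypothetical helper, not kubelet code: the real implementation
# also truncates the log tail to a size limit.

def termination_message(file_contents, log_tail, exit_code, policy):
    """Pick the container's termination message per its policy."""
    if file_contents:
        # A non-empty termination-message file always wins.
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        # Empty file + failed container: fall back to the log tail.
        return log_tail
    # Default "File" policy with an empty file yields an empty message.
    return ""
```

This matches the test flow above: the container writes `DONE` only to its log, exits non-zero, and the status ends up carrying `DONE` as the termination message.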
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:17:53.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-467b911d-5e3f-4dbc-9d7b-af14ba9b4128 in namespace container-probe-1486 Mar 23 00:17:57.759: INFO: Started pod busybox-467b911d-5e3f-4dbc-9d7b-af14ba9b4128 in namespace container-probe-1486 STEP: checking the pod's current state and verifying that restartCount is present Mar 23 00:17:57.762: INFO: Initial restart count of pod busybox-467b911d-5e3f-4dbc-9d7b-af14ba9b4128 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:21:58.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1486" for this 
suite. • [SLOW TEST:244.911 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:21:58.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:22:02.535: INFO: Waiting up to 5m0s for pod "client-envvars-59df25b3-594b-4277-964b-664a06bef5f6" in namespace "pods-487" to be "Succeeded or Failed" Mar 23 00:22:02.544: INFO: Pod "client-envvars-59df25b3-594b-4277-964b-664a06bef5f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816795ms Mar 23 00:22:04.548: INFO: Pod "client-envvars-59df25b3-594b-4277-964b-664a06bef5f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012669057s Mar 23 00:22:06.552: INFO: Pod "client-envvars-59df25b3-594b-4277-964b-664a06bef5f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016896143s STEP: Saw pod success Mar 23 00:22:06.552: INFO: Pod "client-envvars-59df25b3-594b-4277-964b-664a06bef5f6" satisfied condition "Succeeded or Failed" Mar 23 00:22:06.556: INFO: Trying to get logs from node latest-worker2 pod client-envvars-59df25b3-594b-4277-964b-664a06bef5f6 container env3cont: STEP: delete the pod Mar 23 00:22:06.597: INFO: Waiting for pod client-envvars-59df25b3-594b-4277-964b-664a06bef5f6 to disappear Mar 23 00:22:06.614: INFO: Pod client-envvars-59df25b3-594b-4277-964b-664a06bef5f6 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:22:06.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-487" for this suite. • [SLOW TEST:8.267 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:22:06.639: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 00:22:06.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254" in namespace "downward-api-801" to be "Succeeded or Failed" Mar 23 00:22:06.710: INFO: Pod "downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254": Phase="Pending", Reason="", readiness=false. Elapsed: 10.244589ms Mar 23 00:22:08.713: INFO: Pod "downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013671849s Mar 23 00:22:10.717: INFO: Pod "downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017889784s STEP: Saw pod success Mar 23 00:22:10.717: INFO: Pod "downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254" satisfied condition "Succeeded or Failed" Mar 23 00:22:10.720: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254 container client-container: STEP: delete the pod Mar 23 00:22:10.768: INFO: Waiting for pod downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254 to disappear Mar 23 00:22:10.775: INFO: Pod downwardapi-volume-9fa45622-b3a9-439c-b51c-9e9c7128f254 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:22:10.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-801" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2474,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:22:10.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Mar 23 00:22:10.862: INFO: Waiting up to 5m0s for pod 
"var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914" in namespace "var-expansion-2241" to be "Succeeded or Failed" Mar 23 00:22:10.865: INFO: Pod "var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914": Phase="Pending", Reason="", readiness=false. Elapsed: 3.774395ms Mar 23 00:22:12.870: INFO: Pod "var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007914942s Mar 23 00:22:14.874: INFO: Pod "var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012221817s STEP: Saw pod success Mar 23 00:22:14.874: INFO: Pod "var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914" satisfied condition "Succeeded or Failed" Mar 23 00:22:14.876: INFO: Trying to get logs from node latest-worker2 pod var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914 container dapi-container: STEP: delete the pod Mar 23 00:22:14.908: INFO: Waiting for pod var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914 to disappear Mar 23 00:22:14.921: INFO: Pod var-expansion-9cc93395-6f7e-4079-8345-98ce0ae25914 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:22:14.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2241" for this suite. 
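Editor's note: the variable-expansion test above substitutes `$(VAR)` references in a container's command from its environment. A rough reimplementation of that substitution rule — Kubernetes leaves unresolvable references literal and treats `$$` as an escape producing a literal `$` — offered as an illustrative sketch, not the expansion code from k8s.io/kubernetes:

```python
import re

# Sketch of Kubernetes-style $(VAR) command expansion:
#   $(NAME) -> env value when NAME is defined
#   $(NAME) -> left literal when NAME is undefined
#   $$      -> literal $, so $$(NAME) yields the literal text $(NAME)
def expand(s, env):
    out = []
    i = 0
    while i < len(s):
        if s.startswith("$$", i):
            out.append("$")
            i += 2
        elif s[i] == "$":
            m = re.match(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", s[i:])
            if m and m.group(1) in env:
                out.append(env[m.group(1)])
                i += m.end()
            elif m:
                out.append(m.group(0))  # unresolved reference stays literal
                i += m.end()
            else:
                out.append("$")  # bare $ with no reference syntax
                i += 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

For example, with `TEST_VAR=test-value` in the container's env, a command argument like `"$(TEST_VAR)"` expands to `test-value`, while `"$$(TEST_VAR)"` stays as the literal string `$(TEST_VAR)`.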
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2484,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:22:14.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-9debd65d-9f20-471e-b431-9f743cc9fdea STEP: Creating a pod to test consume configMaps Mar 23 00:22:15.043: INFO: Waiting up to 5m0s for pod "pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9" in namespace "configmap-4702" to be "Succeeded or Failed" Mar 23 00:22:15.047: INFO: Pod "pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.447428ms Mar 23 00:22:17.050: INFO: Pod "pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006733965s Mar 23 00:22:19.054: INFO: Pod "pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01042931s STEP: Saw pod success Mar 23 00:22:19.054: INFO: Pod "pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9" satisfied condition "Succeeded or Failed" Mar 23 00:22:19.056: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9 container configmap-volume-test: STEP: delete the pod Mar 23 00:22:19.088: INFO: Waiting for pod pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9 to disappear Mar 23 00:22:19.101: INFO: Pod pod-configmaps-9919fa54-f300-4332-96ed-ee425177a7d9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:22:19.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4702" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2486,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:22:19.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 23 00:22:19.185: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 23 00:22:24.195: INFO: Pod name pod-release: 
Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:22:24.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1134" for this suite. • [SLOW TEST:5.220 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":153,"skipped":2488,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:22:24.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 23 00:22:24.382: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:22:32.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3563" for this suite. • [SLOW TEST:8.305 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":154,"skipped":2495,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:22:32.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:22:32.731: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 23 00:22:32.738: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:32.754: INFO: Number of nodes with available pods: 0 Mar 23 00:22:32.754: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:22:33.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:33.763: INFO: Number of nodes with available pods: 0 Mar 23 00:22:33.763: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:22:34.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:34.762: INFO: Number of nodes with available pods: 0 Mar 23 00:22:34.762: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:22:35.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:35.763: INFO: Number of nodes with available pods: 0 Mar 23 00:22:35.763: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:22:36.824: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:36.827: INFO: Number of nodes with available pods: 1 Mar 23 00:22:36.827: INFO: Node latest-worker2 is running more than one daemon pod Mar 23 00:22:37.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:37.765: INFO: Number of nodes with available pods: 2 Mar 23 00:22:37.765: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 23 00:22:37.833: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:37.833: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:37.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:38.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:38.897: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:38.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:39.897: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:39.897: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:39.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:40.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 23 00:22:40.896: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:40.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:41.900: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:41.900: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:41.900: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:41.903: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:42.897: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:42.897: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:42.897: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:42.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:43.895: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:43.895: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 23 00:22:43.895: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:43.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:44.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:44.896: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:44.896: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:44.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:45.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:45.896: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:45.896: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:45.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:46.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:46.896: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 23 00:22:46.896: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:46.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:47.897: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:47.897: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:47.897: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:47.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:48.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:48.896: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:48.896: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:48.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:49.902: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:49.902: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 23 00:22:49.902: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:49.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:50.900: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:50.900: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:50.900: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:50.903: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:51.903: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:51.903: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:51.903: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:51.907: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:52.897: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:52.897: INFO: Wrong image for pod: daemon-set-pq92t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 23 00:22:52.897: INFO: Pod daemon-set-pq92t is not available Mar 23 00:22:52.927: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:53.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:53.896: INFO: Pod daemon-set-g985g is not available Mar 23 00:22:53.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:54.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:54.896: INFO: Pod daemon-set-g985g is not available Mar 23 00:22:54.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:55.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:55.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:56.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 23 00:22:56.896: INFO: Pod daemon-set-cbwrl is not available Mar 23 00:22:56.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:57.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:57.896: INFO: Pod daemon-set-cbwrl is not available Mar 23 00:22:57.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:58.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:58.896: INFO: Pod daemon-set-cbwrl is not available Mar 23 00:22:58.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:22:59.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:22:59.896: INFO: Pod daemon-set-cbwrl is not available Mar 23 00:22:59.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:00.896: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 23 00:23:00.896: INFO: Pod daemon-set-cbwrl is not available Mar 23 00:23:00.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:01.897: INFO: Wrong image for pod: daemon-set-cbwrl. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 23 00:23:01.897: INFO: Pod daemon-set-cbwrl is not available Mar 23 00:23:01.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:02.896: INFO: Pod daemon-set-5w4k7 is not available Mar 23 00:23:02.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 23 00:23:02.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:02.914: INFO: Number of nodes with available pods: 1 Mar 23 00:23:02.914: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:23:03.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:03.921: INFO: Number of nodes with available pods: 1 Mar 23 00:23:03.921: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:23:04.933: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:04.936: INFO: Number of nodes with available pods: 1 Mar 23 00:23:04.936: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:23:05.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:05.925: INFO: Number of nodes with available pods: 1 Mar 23 00:23:05.925: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:23:06.918: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:23:06.922: INFO: Number of nodes with available pods: 2 Mar 23 00:23:06.922: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8775, will wait for the garbage 
collector to delete the pods Mar 23 00:23:06.993: INFO: Deleting DaemonSet.extensions daemon-set took: 6.178241ms Mar 23 00:23:07.293: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275982ms Mar 23 00:23:12.996: INFO: Number of nodes with available pods: 0 Mar 23 00:23:12.996: INFO: Number of running nodes: 0, number of available pods: 0 Mar 23 00:23:12.998: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8775/daemonsets","resourceVersion":"2018538"},"items":null} Mar 23 00:23:13.000: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8775/pods","resourceVersion":"2018538"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:23:13.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8775" for this suite. 
• [SLOW TEST:40.380 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":155,"skipped":2506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:23:13.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 23 00:23:13.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6794' Mar 23 00:23:13.192: INFO: stderr: "" Mar 23 00:23:13.192: INFO: 
stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Mar 23 00:23:13.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6794' Mar 23 00:23:22.997: INFO: stderr: "" Mar 23 00:23:22.997: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:23:22.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6794" for this suite. • [SLOW TEST:9.994 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":156,"skipped":2535,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:23:23.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 23 00:23:23.078: INFO: Waiting up to 5m0s for pod "pod-33e5bdfc-688a-4874-bf45-75383cf49637" in namespace "emptydir-2071" to be "Succeeded or Failed" Mar 23 00:23:23.082: INFO: Pod "pod-33e5bdfc-688a-4874-bf45-75383cf49637": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076902ms Mar 23 00:23:25.086: INFO: Pod "pod-33e5bdfc-688a-4874-bf45-75383cf49637": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00834913s Mar 23 00:23:27.091: INFO: Pod "pod-33e5bdfc-688a-4874-bf45-75383cf49637": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012754786s STEP: Saw pod success Mar 23 00:23:27.091: INFO: Pod "pod-33e5bdfc-688a-4874-bf45-75383cf49637" satisfied condition "Succeeded or Failed" Mar 23 00:23:27.094: INFO: Trying to get logs from node latest-worker2 pod pod-33e5bdfc-688a-4874-bf45-75383cf49637 container test-container: STEP: delete the pod Mar 23 00:23:27.113: INFO: Waiting for pod pod-33e5bdfc-688a-4874-bf45-75383cf49637 to disappear Mar 23 00:23:27.118: INFO: Pod pod-33e5bdfc-688a-4874-bf45-75383cf49637 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:23:27.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2071" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2535,"failed":0} SSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:23:27.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7101, will wait for the garbage collector to delete the pods Mar 23 00:23:31.256: INFO: Deleting Job.batch foo took: 7.486846ms Mar 23 00:23:31.556: INFO: Terminating Job.batch foo pods took: 300.23429ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:24:13.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7101" for this suite. 
• [SLOW TEST:45.961 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":158,"skipped":2539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:24:13.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:24:13.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2265" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":159,"skipped":2580,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:24:13.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Mar 23 00:24:13.271: INFO: Waiting up to 5m0s for pod "client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972" in namespace "containers-4683" to be "Succeeded or Failed" Mar 23 00:24:13.286: INFO: Pod "client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972": Phase="Pending", Reason="", readiness=false. Elapsed: 14.856041ms Mar 23 00:24:15.377: INFO: Pod "client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106012892s Mar 23 00:24:17.381: INFO: Pod "client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109987225s STEP: Saw pod success Mar 23 00:24:17.381: INFO: Pod "client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972" satisfied condition "Succeeded or Failed" Mar 23 00:24:17.384: INFO: Trying to get logs from node latest-worker pod client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972 container test-container: STEP: delete the pod Mar 23 00:24:17.438: INFO: Waiting for pod client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972 to disappear Mar 23 00:24:17.446: INFO: Pod client-containers-ef3caa9d-08af-4e21-bdaf-c35c9c67b972 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:24:17.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4683" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:24:17.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-cef54a24-ec79-43a1-a8e3-0027ce56dbc1 STEP: Creating secret with name 
s-test-opt-upd-4741ae0a-268e-417a-9da5-e732df458389
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-cef54a24-ec79-43a1-a8e3-0027ce56dbc1
STEP: Updating secret s-test-opt-upd-4741ae0a-268e-417a-9da5-e732df458389
STEP: Creating secret with name s-test-opt-create-2088671b-206f-4411-97f0-3be290c3bb52
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:24:25.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4667" for this suite.
• [SLOW TEST:8.217 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2617,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:24:25.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-6117/secret-test-f013f09e-1c82-41b9-842f-1213fd057b7c
STEP: Creating a pod to test consume secrets
Mar 23 00:24:25.772: INFO: Waiting up to 5m0s for pod "pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350" in namespace "secrets-6117" to be "Succeeded or Failed"
Mar 23 00:24:25.775: INFO: Pod "pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205774ms
Mar 23 00:24:29.450: INFO: Pod "pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350": Phase="Pending", Reason="", readiness=false. Elapsed: 3.677545351s
Mar 23 00:24:31.454: INFO: Pod "pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.682067351s
STEP: Saw pod success
Mar 23 00:24:31.454: INFO: Pod "pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350" satisfied condition "Succeeded or Failed"
Mar 23 00:24:31.457: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350 container env-test:
STEP: delete the pod
Mar 23 00:24:31.599: INFO: Waiting for pod pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350 to disappear
Mar 23 00:24:31.602: INFO: Pod pod-configmaps-28bed47f-0617-4d9e-a9d1-b59b95faa350 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:24:31.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6117" for this suite.
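For context, a pod that consumes a Secret through the environment, as this test exercises, can be sketched roughly as follows. This is an illustrative config fragment, not the manifest the suite generates: the Secret name, key, env var, and image are all placeholders.

```yaml
# Hedged sketch of a secret-as-env-var pod; all names here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-example        # not the generated pod name from this run
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                # assumed image
    command: ["sh", "-c", "env"]  # dump env so the secret value appears in logs
    env:
    - name: SECRET_DATA           # hypothetical env var name
      valueFrom:
        secretKeyRef:
          name: secret-test       # hypothetical Secret name
          key: data-1             # hypothetical key inside the Secret
```

The test pattern above waits for such a pod to reach "Succeeded" and then reads the container logs to verify the secret value was injected.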
• [SLOW TEST:6.143 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2716,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:24:31.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 23 00:24:31.913: INFO: Waiting up to 5m0s for pod "downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52" in namespace "downward-api-5211" to be "Succeeded or Failed"
Mar 23 00:24:31.940: INFO: Pod "downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52": Phase="Pending", Reason="", readiness=false. Elapsed: 26.821649ms
Mar 23 00:24:33.958: INFO: Pod "downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044242007s
Mar 23 00:24:35.961: INFO: Pod "downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047768782s
STEP: Saw pod success
Mar 23 00:24:35.961: INFO: Pod "downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52" satisfied condition "Succeeded or Failed"
Mar 23 00:24:35.964: INFO: Trying to get logs from node latest-worker2 pod downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52 container dapi-container:
STEP: delete the pod
Mar 23 00:24:35.997: INFO: Waiting for pod downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52 to disappear
Mar 23 00:24:36.012: INFO: Pod downward-api-f2194c0d-5f6b-4a5e-9d01-955f982b7b52 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:24:36.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5211" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2722,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:24:36.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 23 00:24:36.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761" in namespace "projected-9678" to be "Succeeded or Failed"
Mar 23 00:24:36.174: INFO: Pod "downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761": Phase="Pending", Reason="", readiness=false. Elapsed: 51.737002ms
Mar 23 00:24:38.197: INFO: Pod "downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075517833s
Mar 23 00:24:40.202: INFO: Pod "downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079767938s
STEP: Saw pod success
Mar 23 00:24:40.202: INFO: Pod "downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761" satisfied condition "Succeeded or Failed"
Mar 23 00:24:40.204: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761 container client-container:
STEP: delete the pod
Mar 23 00:24:40.358: INFO: Waiting for pod downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761 to disappear
Mar 23 00:24:40.370: INFO: Pod downwardapi-volume-ec9233da-5282-4cb3-87c4-75254c59f761 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:24:40.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9678" for this suite.
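The projected downward API volume exercised here exposes a container's resource request as a file. A minimal sketch of such a pod follows; the pod name, mount path, file name, and image are illustrative, and the 250m request is an arbitrary example value:

```yaml
# Hedged sketch of a projected downwardAPI volume surfacing requests.cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # the value the volume file will contain
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request        # file name under the mount path
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```

The test verifies the file content by reading the container's logs after the pod succeeds, which is why the container simply cats the projected file.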
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:24:40.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 23 00:24:48.525: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 23 00:24:48.546: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 23 00:24:50.547: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 23 00:24:50.551: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 23 00:24:52.547: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 23 00:24:52.549: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:24:52.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9708" for this suite.
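A postStart exec hook of the kind this test runs can be sketched as below. The pod name matches the one in the log, but the image, sleep command, and hook command are illustrative assumptions, not the suite's actual manifest:

```yaml
# Hedged sketch of a pod with a postStart exec lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                      # assumed image
    command: ["sh", "-c", "sleep 600"]  # keep the container running
    lifecycle:
      postStart:
        exec:
          # Illustrative hook command; the e2e test instead notifies a
          # separate handler pod to prove the hook fired.
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```

Note that postStart runs asynchronously with the container's entrypoint; Kubernetes only guarantees the hook fires after the container is created, which is why the test polls for the hook's side effect rather than assuming ordering.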
• [SLOW TEST:12.180 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2747,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:24:52.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Mar 23 00:24:56.658: INFO: Pod pod-hostip-f729cff1-4d66-40f3-8f25-b30f3b8e14fd has hostIP: 172.17.0.13
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:24:56.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-46" for this suite.
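The host IP this test reads from `pod.status.hostIP` can also be surfaced inside a container via the downward API, which is what the earlier [sig-node] Downward API test exercises. A hedged sketch (pod name, env var name, and image are illustrative):

```yaml
# Hedged sketch: expose the node's IP to the container as an env var.
apiVersion: v1
kind: Pod
metadata:
  name: hostip-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP           # hypothetical env var name
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # populated by the kubelet at runtime
```

Because `status.hostIP` is filled in only after scheduling, such tests create the pod and then poll the API (or the container's output) rather than expecting the value immediately.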
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2754,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:24:56.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 23 00:24:56.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0" in namespace "downward-api-262" to be "Succeeded or Failed"
Mar 23 00:24:56.773: INFO: Pod "downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.860498ms
Mar 23 00:24:58.784: INFO: Pod "downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01492393s
Mar 23 00:25:00.789: INFO: Pod "downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019573748s
STEP: Saw pod success
Mar 23 00:25:00.789: INFO: Pod "downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0" satisfied condition "Succeeded or Failed"
Mar 23 00:25:00.792: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0 container client-container:
STEP: delete the pod
Mar 23 00:25:00.844: INFO: Waiting for pod downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0 to disappear
Mar 23 00:25:00.850: INFO: Pod downwardapi-volume-41c7efdd-6b06-4393-b15c-5b90dd58d5e0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:25:00.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-262" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:25:00.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig
+notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-818 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-818;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-818 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-818;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-818.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-818.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-818.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-818.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-818.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-818.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-818.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 76.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.76_udp@PTR;check="$$(dig +tcp +noall +answer +search 76.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.76_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-818 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-818;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-818 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-818;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-818.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-818.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-818.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-818.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-818.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-818.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-818.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-818.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-818.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 76.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.76_udp@PTR;check="$$(dig +tcp +noall +answer +search 76.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.76_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 23 00:25:07.034: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.040: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.043: INFO: Unable to read wheezy_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.066: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could 
not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.072: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.075: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.077: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.095: INFO: Unable to read jessie_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.097: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.099: INFO: Unable to read jessie_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.101: INFO: Unable to read jessie_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server 
could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.104: INFO: Unable to read jessie_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.110: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.112: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:07.130: INFO: Lookups using dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-818 wheezy_tcp@dns-test-service.dns-818 wheezy_udp@dns-test-service.dns-818.svc wheezy_tcp@dns-test-service.dns-818.svc wheezy_udp@_http._tcp.dns-test-service.dns-818.svc wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-818 jessie_tcp@dns-test-service.dns-818 jessie_udp@dns-test-service.dns-818.svc jessie_tcp@dns-test-service.dns-818.svc jessie_udp@_http._tcp.dns-test-service.dns-818.svc jessie_tcp@_http._tcp.dns-test-service.dns-818.svc] Mar 23 00:25:12.136: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server 
could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.139: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.147: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.152: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.156: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.159: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.177: INFO: Unable to read jessie_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the 
server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.182: INFO: Unable to read jessie_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.185: INFO: Unable to read jessie_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.188: INFO: Unable to read jessie_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.190: INFO: Unable to read jessie_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9) Mar 23 00:25:12.212: INFO: Lookups using dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9 failed for: 
[wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-818 wheezy_tcp@dns-test-service.dns-818 wheezy_udp@dns-test-service.dns-818.svc wheezy_tcp@dns-test-service.dns-818.svc wheezy_udp@_http._tcp.dns-test-service.dns-818.svc wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-818 jessie_tcp@dns-test-service.dns-818 jessie_udp@dns-test-service.dns-818.svc jessie_tcp@dns-test-service.dns-818.svc jessie_udp@_http._tcp.dns-test-service.dns-818.svc jessie_tcp@_http._tcp.dns-test-service.dns-818.svc]
Mar 23 00:25:17.156: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.159: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.162: INFO: Unable to read wheezy_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.169: INFO: Unable to read wheezy_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.172: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.174: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.178: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.210: INFO: Unable to read jessie_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.213: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.216: INFO: Unable to read jessie_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.218: INFO: Unable to read jessie_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.221: INFO: Unable to read jessie_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.224: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.226: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:17.238: INFO: Lookups using dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-818 wheezy_tcp@dns-test-service.dns-818 wheezy_udp@dns-test-service.dns-818.svc wheezy_tcp@dns-test-service.dns-818.svc wheezy_udp@_http._tcp.dns-test-service.dns-818.svc wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-818 jessie_tcp@dns-test-service.dns-818 jessie_udp@dns-test-service.dns-818.svc jessie_tcp@dns-test-service.dns-818.svc jessie_udp@_http._tcp.dns-test-service.dns-818.svc jessie_tcp@_http._tcp.dns-test-service.dns-818.svc]
Mar 23 00:25:22.135: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.138: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.152: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.155: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.157: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.175: INFO: Unable to read jessie_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.178: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.180: INFO: Unable to read jessie_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.186: INFO: Unable to read jessie_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.189: INFO: Unable to read jessie_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.192: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.194: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:22.212: INFO: Lookups using dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-818 wheezy_tcp@dns-test-service.dns-818 wheezy_udp@dns-test-service.dns-818.svc wheezy_tcp@dns-test-service.dns-818.svc wheezy_udp@_http._tcp.dns-test-service.dns-818.svc wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-818 jessie_tcp@dns-test-service.dns-818 jessie_udp@dns-test-service.dns-818.svc jessie_tcp@dns-test-service.dns-818.svc jessie_udp@_http._tcp.dns-test-service.dns-818.svc jessie_tcp@_http._tcp.dns-test-service.dns-818.svc]
Mar 23 00:25:27.151: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.154: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.157: INFO: Unable to read wheezy_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.160: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.169: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.195: INFO: Unable to read jessie_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.198: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.200: INFO: Unable to read jessie_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.204: INFO: Unable to read jessie_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.206: INFO: Unable to read jessie_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.209: INFO: Unable to read jessie_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.211: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.214: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:27.240: INFO: Lookups using dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-818 wheezy_tcp@dns-test-service.dns-818 wheezy_udp@dns-test-service.dns-818.svc wheezy_tcp@dns-test-service.dns-818.svc wheezy_udp@_http._tcp.dns-test-service.dns-818.svc wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-818 jessie_tcp@dns-test-service.dns-818 jessie_udp@dns-test-service.dns-818.svc jessie_tcp@dns-test-service.dns-818.svc jessie_udp@_http._tcp.dns-test-service.dns-818.svc jessie_tcp@_http._tcp.dns-test-service.dns-818.svc]
Mar 23 00:25:32.135: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.138: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.141: INFO: Unable to read wheezy_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.145: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.148: INFO: Unable to read wheezy_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.153: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.156: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.176: INFO: Unable to read jessie_udp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.182: INFO: Unable to read jessie_udp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.185: INFO: Unable to read jessie_tcp@dns-test-service.dns-818 from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.188: INFO: Unable
to read jessie_udp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.191: INFO: Unable to read jessie_tcp@dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.194: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.197: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-818.svc from pod dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9: the server could not find the requested resource (get pods dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9)
Mar 23 00:25:32.216: INFO: Lookups using dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-818 wheezy_tcp@dns-test-service.dns-818 wheezy_udp@dns-test-service.dns-818.svc wheezy_tcp@dns-test-service.dns-818.svc wheezy_udp@_http._tcp.dns-test-service.dns-818.svc wheezy_tcp@_http._tcp.dns-test-service.dns-818.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-818 jessie_tcp@dns-test-service.dns-818 jessie_udp@dns-test-service.dns-818.svc jessie_tcp@dns-test-service.dns-818.svc jessie_udp@_http._tcp.dns-test-service.dns-818.svc jessie_tcp@_http._tcp.dns-test-service.dns-818.svc]
Mar 23 00:25:37.253: INFO: DNS probes using dns-818/dns-test-4f0a41ad-fcea-4694-b5b3-f2f1024a09d9 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:25:37.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-818" for this suite.
• [SLOW TEST:36.955 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":168,"skipped":2889,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:25:37.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:25:37.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Mar 23 00:25:38.447: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-23T00:25:38Z generation:1 name:name1 resourceVersion:2019392
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ca80a0a2-ac7c-4882-9166-f1517dd183f1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Mar 23 00:25:48.452: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-23T00:25:48Z generation:1 name:name2 resourceVersion:2019439 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8c7585ba-9dd4-4d5f-b551-dc019e204828] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Mar 23 00:25:58.459: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-23T00:25:38Z generation:2 name:name1 resourceVersion:2019469 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ca80a0a2-ac7c-4882-9166-f1517dd183f1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Mar 23 00:26:08.464: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-23T00:25:48Z generation:2 name:name2 resourceVersion:2019499 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8c7585ba-9dd4-4d5f-b551-dc019e204828] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Mar 23 00:26:18.472: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-23T00:25:38Z generation:2 name:name1 resourceVersion:2019529 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ca80a0a2-ac7c-4882-9166-f1517dd183f1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Mar 23 00:26:28.484: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-23T00:25:48Z generation:2 name:name2 resourceVersion:2019558 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8c7585ba-9dd4-4d5f-b551-dc019e204828] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:26:38.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8243" for this suite.
• [SLOW TEST:61.198 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":169,"skipped":2904,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:26:39.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:27:39.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1943" for this suite.
• [SLOW TEST:60.094 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2911,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:27:39.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:27:39.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-738" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":171,"skipped":2935,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:27:39.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 23 00:27:39.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8167'
Mar 23
00:27:42.006: INFO: stderr: "" Mar 23 00:27:42.006: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 23 00:27:47.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8167 -o json' Mar 23 00:27:47.154: INFO: stderr: "" Mar 23 00:27:47.154: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-23T00:27:41Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8167\",\n \"resourceVersion\": \"2019832\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8167/pods/e2e-test-httpd-pod\",\n \"uid\": \"b7adc847-47b2-4c29-bfe0-4daf610442fc\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-n886m\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n 
\"volumes\": [\n {\n \"name\": \"default-token-n886m\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-n886m\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T00:27:42Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T00:27:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T00:27:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T00:27:41Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d4ddb4b35a66308846bcd3a0a127b362500a069f886d14fd57ab82a589a17b5b\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-23T00:27:44Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.249\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.249\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-23T00:27:42Z\"\n }\n}\n" STEP: replace the image in the pod Mar 23 00:27:47.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8167' Mar 23 00:27:47.470: INFO: stderr: "" Mar 23 00:27:47.470: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Mar 23 00:27:47.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8167'
Mar 23 00:27:52.765: INFO: stderr: ""
Mar 23 00:27:52.765: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:27:52.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8167" for this suite.
• [SLOW TEST:13.581 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":172,"skipped":2949,"failed":0}
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:27:52.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 23 00:27:52.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9384' Mar 23 00:27:53.070: INFO: stderr: "" Mar 23 00:27:53.070: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 23 00:27:53.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9384' Mar 23 00:27:53.183: INFO: stderr: "" Mar 23 00:27:53.183: INFO: stdout: "update-demo-nautilus-cp57r update-demo-nautilus-w8w2l " Mar 23 00:27:53.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cp57r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:27:53.276: INFO: stderr: "" Mar 23 00:27:53.276: INFO: stdout: "" Mar 23 00:27:53.276: INFO: update-demo-nautilus-cp57r is created but not running Mar 23 00:27:58.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9384' Mar 23 00:27:58.385: INFO: stderr: "" Mar 23 00:27:58.385: INFO: stdout: "update-demo-nautilus-cp57r update-demo-nautilus-w8w2l " Mar 23 00:27:58.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cp57r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:27:58.483: INFO: stderr: "" Mar 23 00:27:58.483: INFO: stdout: "true" Mar 23 00:27:58.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cp57r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:27:58.559: INFO: stderr: "" Mar 23 00:27:58.559: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 23 00:27:58.559: INFO: validating pod update-demo-nautilus-cp57r Mar 23 00:27:58.562: INFO: got data: { "image": "nautilus.jpg" } Mar 23 00:27:58.563: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 23 00:27:58.563: INFO: update-demo-nautilus-cp57r is verified up and running Mar 23 00:27:58.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:27:58.656: INFO: stderr: "" Mar 23 00:27:58.656: INFO: stdout: "true" Mar 23 00:27:58.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:27:58.748: INFO: stderr: "" Mar 23 00:27:58.748: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 23 00:27:58.749: INFO: validating pod update-demo-nautilus-w8w2l Mar 23 00:27:58.752: INFO: got data: { "image": "nautilus.jpg" } Mar 23 00:27:58.752: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 23 00:27:58.752: INFO: update-demo-nautilus-w8w2l is verified up and running STEP: scaling down the replication controller Mar 23 00:27:58.756: INFO: scanned /root for discovery docs: Mar 23 00:27:58.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9384' Mar 23 00:27:59.864: INFO: stderr: "" Mar 23 00:27:59.864: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 23 00:27:59.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9384' Mar 23 00:27:59.956: INFO: stderr: "" Mar 23 00:27:59.956: INFO: stdout: "update-demo-nautilus-cp57r update-demo-nautilus-w8w2l " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 23 00:28:04.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9384' Mar 23 00:28:05.047: INFO: stderr: "" Mar 23 00:28:05.047: INFO: stdout: "update-demo-nautilus-w8w2l " Mar 23 00:28:05.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:28:05.145: INFO: stderr: "" Mar 23 00:28:05.145: INFO: stdout: "true" Mar 23 00:28:05.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:28:05.244: INFO: stderr: "" Mar 23 00:28:05.244: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 23 00:28:05.244: INFO: validating pod update-demo-nautilus-w8w2l Mar 23 00:28:05.248: INFO: got data: { "image": "nautilus.jpg" } Mar 23 00:28:05.248: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 23 00:28:05.248: INFO: update-demo-nautilus-w8w2l is verified up and running STEP: scaling up the replication controller Mar 23 00:28:05.250: INFO: scanned /root for discovery docs: Mar 23 00:28:05.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9384' Mar 23 00:28:06.405: INFO: stderr: "" Mar 23 00:28:06.405: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 23 00:28:06.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9384' Mar 23 00:28:06.500: INFO: stderr: "" Mar 23 00:28:06.500: INFO: stdout: "update-demo-nautilus-mkh2j update-demo-nautilus-w8w2l " Mar 23 00:28:06.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mkh2j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:28:06.597: INFO: stderr: "" Mar 23 00:28:06.597: INFO: stdout: "" Mar 23 00:28:06.597: INFO: update-demo-nautilus-mkh2j is created but not running Mar 23 00:28:11.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9384' Mar 23 00:28:11.712: INFO: stderr: "" Mar 23 00:28:11.712: INFO: stdout: "update-demo-nautilus-mkh2j update-demo-nautilus-w8w2l " Mar 23 00:28:11.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mkh2j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:28:11.805: INFO: stderr: "" Mar 23 00:28:11.805: INFO: stdout: "true" Mar 23 00:28:11.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mkh2j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:28:11.902: INFO: stderr: "" Mar 23 00:28:11.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 23 00:28:11.902: INFO: validating pod update-demo-nautilus-mkh2j Mar 23 00:28:11.907: INFO: got data: { "image": "nautilus.jpg" } Mar 23 00:28:11.907: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 23 00:28:11.907: INFO: update-demo-nautilus-mkh2j is verified up and running Mar 23 00:28:11.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:28:11.991: INFO: stderr: "" Mar 23 00:28:11.991: INFO: stdout: "true" Mar 23 00:28:11.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8w2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9384' Mar 23 00:28:12.075: INFO: stderr: "" Mar 23 00:28:12.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 23 00:28:12.075: INFO: validating pod update-demo-nautilus-w8w2l Mar 23 00:28:12.077: INFO: got data: { "image": "nautilus.jpg" } Mar 23 00:28:12.078: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 23 00:28:12.078: INFO: update-demo-nautilus-w8w2l is verified up and running STEP: using delete to clean up resources Mar 23 00:28:12.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9384' Mar 23 00:28:12.216: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:28:12.217: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 23 00:28:12.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9384' Mar 23 00:28:12.334: INFO: stderr: "No resources found in kubectl-9384 namespace.\n" Mar 23 00:28:12.334: INFO: stdout: "" Mar 23 00:28:12.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9384 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 23 00:28:12.440: INFO: stderr: "" Mar 23 00:28:12.441: INFO: stdout: "update-demo-nautilus-mkh2j\nupdate-demo-nautilus-w8w2l\n" Mar 23 00:28:12.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9384' Mar 23 00:28:13.051: INFO: stderr: "No resources found in kubectl-9384 namespace.\n" Mar 23 00:28:13.051: INFO: stdout: "" Mar 23 00:28:13.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9384 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 23 00:28:13.152: INFO: stderr: "" Mar 23 00:28:13.152: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:28:13.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9384" for this suite. 
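Throughout the Update Demo test above, readiness is polled with a kubectl Go template that prints `true` only when the `update-demo` container appears in `status.containerStatuses` with a `running` state. A minimal Python sketch of that same predicate, run against a pod-status fragment trimmed from the JSON dumped earlier in this log (the function name `container_running` is hypothetical, not part of the test framework):

```python
import json

def container_running(pod: dict, name: str) -> bool:
    """Mirror of the e2e Go template check: True only if the named
    container is listed in status.containerStatuses and its state
    map has a 'running' key."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

# Status fragment trimmed from the pod JSON dumped earlier in this log.
pod = json.loads("""
{
  "status": {
    "containerStatuses": [
      {
        "name": "update-demo",
        "ready": true,
        "restartCount": 0,
        "state": {"running": {"startedAt": "2020-03-23T00:27:44Z"}}
      }
    ]
  }
}
""")

print(container_running(pod, "update-demo"))   # True: running state present
print(container_running(pod, "other-name"))    # False: no such container
```

This is why the log alternates between an empty stdout ("created but not running") and `"true"`: the template emits nothing until the running state appears, and the test re-polls on a 5-second interval.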
• [SLOW TEST:20.371 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":173,"skipped":2949,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:28:13.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-481 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-481 I0323 00:28:13.367693 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-481, replica count: 2 I0323 00:28:16.418167 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 00:28:19.418434 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 00:28:19.418: INFO: Creating new exec pod Mar 23 00:28:24.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-481 execpodbxbnz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 23 00:28:24.682: INFO: stderr: "I0323 00:28:24.574252 1891 log.go:172] (0xc000a7ee70) (0xc000be03c0) Create stream\nI0323 00:28:24.574300 1891 log.go:172] (0xc000a7ee70) (0xc000be03c0) Stream added, broadcasting: 1\nI0323 00:28:24.577334 1891 log.go:172] (0xc000a7ee70) Reply frame received for 1\nI0323 00:28:24.577396 1891 log.go:172] (0xc000a7ee70) (0xc000be0460) Create stream\nI0323 00:28:24.577419 1891 log.go:172] (0xc000a7ee70) (0xc000be0460) Stream added, broadcasting: 3\nI0323 00:28:24.578379 1891 log.go:172] (0xc000a7ee70) Reply frame received for 3\nI0323 00:28:24.578412 1891 log.go:172] (0xc000a7ee70) (0xc000be0500) Create stream\nI0323 00:28:24.578420 1891 log.go:172] (0xc000a7ee70) (0xc000be0500) Stream added, broadcasting: 5\nI0323 00:28:24.579225 1891 log.go:172] (0xc000a7ee70) Reply frame received for 5\nI0323 00:28:24.674799 1891 log.go:172] (0xc000a7ee70) Data frame received for 5\nI0323 00:28:24.674838 1891 log.go:172] (0xc000be0500) (5) Data frame handling\nI0323 00:28:24.674857 1891 log.go:172] (0xc000be0500) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0323 00:28:24.675380 1891 log.go:172] (0xc000a7ee70) Data frame received for 5\nI0323 00:28:24.675414 1891 log.go:172] (0xc000be0500) (5) Data frame handling\nI0323 00:28:24.675482 1891 log.go:172] (0xc000be0500) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0323 00:28:24.675634 1891 log.go:172] (0xc000a7ee70) 
Data frame received for 5\nI0323 00:28:24.675653 1891 log.go:172] (0xc000be0500) (5) Data frame handling\nI0323 00:28:24.675672 1891 log.go:172] (0xc000a7ee70) Data frame received for 3\nI0323 00:28:24.675680 1891 log.go:172] (0xc000be0460) (3) Data frame handling\nI0323 00:28:24.677711 1891 log.go:172] (0xc000a7ee70) Data frame received for 1\nI0323 00:28:24.677744 1891 log.go:172] (0xc000be03c0) (1) Data frame handling\nI0323 00:28:24.677765 1891 log.go:172] (0xc000be03c0) (1) Data frame sent\nI0323 00:28:24.677806 1891 log.go:172] (0xc000a7ee70) (0xc000be03c0) Stream removed, broadcasting: 1\nI0323 00:28:24.677840 1891 log.go:172] (0xc000a7ee70) Go away received\nI0323 00:28:24.678351 1891 log.go:172] (0xc000a7ee70) (0xc000be03c0) Stream removed, broadcasting: 1\nI0323 00:28:24.678378 1891 log.go:172] (0xc000a7ee70) (0xc000be0460) Stream removed, broadcasting: 3\nI0323 00:28:24.678398 1891 log.go:172] (0xc000a7ee70) (0xc000be0500) Stream removed, broadcasting: 5\n" Mar 23 00:28:24.682: INFO: stdout: "" Mar 23 00:28:24.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-481 execpodbxbnz -- /bin/sh -x -c nc -zv -t -w 2 10.96.60.133 80' Mar 23 00:28:24.882: INFO: stderr: "I0323 00:28:24.803789 1911 log.go:172] (0xc0009d04d0) (0xc0008f6000) Create stream\nI0323 00:28:24.803877 1911 log.go:172] (0xc0009d04d0) (0xc0008f6000) Stream added, broadcasting: 1\nI0323 00:28:24.811533 1911 log.go:172] (0xc0009d04d0) Reply frame received for 1\nI0323 00:28:24.811586 1911 log.go:172] (0xc0009d04d0) (0xc0008f60a0) Create stream\nI0323 00:28:24.811595 1911 log.go:172] (0xc0009d04d0) (0xc0008f60a0) Stream added, broadcasting: 3\nI0323 00:28:24.812610 1911 log.go:172] (0xc0009d04d0) Reply frame received for 3\nI0323 00:28:24.812654 1911 log.go:172] (0xc0009d04d0) (0xc0006f52c0) Create stream\nI0323 00:28:24.812674 1911 log.go:172] (0xc0009d04d0) (0xc0006f52c0) Stream added, broadcasting: 5\nI0323 
00:28:24.813639 1911 log.go:172] (0xc0009d04d0) Reply frame received for 5\nI0323 00:28:24.877080 1911 log.go:172] (0xc0009d04d0) Data frame received for 3\nI0323 00:28:24.877221 1911 log.go:172] (0xc0008f60a0) (3) Data frame handling\nI0323 00:28:24.877284 1911 log.go:172] (0xc0009d04d0) Data frame received for 5\nI0323 00:28:24.877341 1911 log.go:172] (0xc0006f52c0) (5) Data frame handling\nI0323 00:28:24.877369 1911 log.go:172] (0xc0006f52c0) (5) Data frame sent\nI0323 00:28:24.877394 1911 log.go:172] (0xc0009d04d0) Data frame received for 5\nI0323 00:28:24.877416 1911 log.go:172] (0xc0006f52c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.60.133 80\nConnection to 10.96.60.133 80 port [tcp/http] succeeded!\nI0323 00:28:24.878594 1911 log.go:172] (0xc0009d04d0) Data frame received for 1\nI0323 00:28:24.878631 1911 log.go:172] (0xc0008f6000) (1) Data frame handling\nI0323 00:28:24.878661 1911 log.go:172] (0xc0008f6000) (1) Data frame sent\nI0323 00:28:24.878686 1911 log.go:172] (0xc0009d04d0) (0xc0008f6000) Stream removed, broadcasting: 1\nI0323 00:28:24.878711 1911 log.go:172] (0xc0009d04d0) Go away received\nI0323 00:28:24.878985 1911 log.go:172] (0xc0009d04d0) (0xc0008f6000) Stream removed, broadcasting: 1\nI0323 00:28:24.878999 1911 log.go:172] (0xc0009d04d0) (0xc0008f60a0) Stream removed, broadcasting: 3\nI0323 00:28:24.879005 1911 log.go:172] (0xc0009d04d0) (0xc0006f52c0) Stream removed, broadcasting: 5\n" Mar 23 00:28:24.882: INFO: stdout: "" Mar 23 00:28:24.882: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:28:24.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-481" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.773 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":174,"skipped":2953,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:28:24.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:28:25.005: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:28:26.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7872" for this suite. 
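The Services test above verifies connectivity by exec'ing `nc -zv -t -w 2 <host> <port>` inside a helper pod, first against the service DNS name and then against the ClusterIP. A hedged Python equivalent of that probe (a sketch, not the framework's implementation; a throwaway local listener stands in for the cluster service):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Equivalent of `nc -zv -t -w 2 host port`: attempt a TCP
    connect with a timeout and report success without sending data."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener standing in for the ClusterIP service.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

print(tcp_reachable("127.0.0.1", port))  # True: listener accepts the connect
srv.close()
print(tcp_reachable("127.0.0.1", port))  # False: connection now refused
```

A zero-byte connect like this only proves the endpoint accepts TCP handshakes, which is exactly the guarantee the conformance test needs after flipping the service type from ExternalName to ClusterIP.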
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":175,"skipped":2954,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:28:26.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:28:26.238: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 23 00:28:26.266: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 23 00:28:31.269: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 23 00:28:31.269: INFO: Creating deployment "test-rolling-update-deployment" Mar 23 00:28:31.272: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 23 00:28:31.301: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 23 00:28:33.308: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 23 00:28:33.310: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520111, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520111, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520111, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520111, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:28:35.332: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 23 00:28:35.342: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6947 /apis/apps/v1/namespaces/deployment-6947/deployments/test-rolling-update-deployment c0bde3cb-2685-43ea-9ea0-3e3dfce20fb0 2020235 1 2020-03-23 00:28:31 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004867018 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-23 00:28:31 +0000 UTC,LastTransitionTime:2020-03-23 00:28:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-23 00:28:34 +0000 UTC,LastTransitionTime:2020-03-23 00:28:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 23 00:28:35.346: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-6947 /apis/apps/v1/namespaces/deployment-6947/replicasets/test-rolling-update-deployment-664dd8fc7f 4c60367a-5340-4961-9e51-02042ac455b4 2020224 1 2020-03-23 00:28:31 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2
deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c0bde3cb-2685-43ea-9ea0-3e3dfce20fb0 0xc004867557 0xc004867558}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048675c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 23 00:28:35.346: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 23 00:28:35.346: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6947 /apis/apps/v1/namespaces/deployment-6947/replicasets/test-rolling-update-controller 1d5257a6-c969-48bf-825e-e24a971ccfec 2020233 2 2020-03-23 00:28:26 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c0bde3cb-2685-43ea-9ea0-3e3dfce20fb0 0xc00486746f 0xc004867480}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048674e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 23 00:28:35.350: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-vfsls" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-vfsls test-rolling-update-deployment-664dd8fc7f- deployment-6947 /api/v1/namespaces/deployment-6947/pods/test-rolling-update-deployment-664dd8fc7f-vfsls 89831c80-2d8a-4a06-8cbf-15408278b550 2020223 0 2020-03-23 00:28:31 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 4c60367a-5340-4961-9e51-02042ac455b4 0xc004867aa7 0xc004867aa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rdm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rdm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rdm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:28:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:28:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:28:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 00:28:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.129,StartTime:2020-03-23 00:28:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 00:28:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://bfe5650a2cfc27b7449c933c1a9cc2af6e467fb055cee4118a2d2a5d2106cd4f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:28:35.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6947" for this suite. • [SLOW TEST:9.170 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":176,"skipped":2978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:28:35.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8206 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8206 STEP: creating replication controller externalsvc in namespace services-8206 I0323 00:28:35.514717 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8206, replica count: 2 I0323 00:28:38.565330 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 00:28:41.565579 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 23 00:28:41.610: INFO: Creating new exec pod Mar 23 00:28:45.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8206 execpodd852q -- /bin/sh -x -c nslookup clusterip-service' Mar 23 00:28:45.857: INFO: stderr: "I0323 00:28:45.775782 1934 log.go:172] (0xc000581ef0) (0xc0006c14a0) Create stream\nI0323 00:28:45.775848 1934 log.go:172] (0xc000581ef0) (0xc0006c14a0) Stream added, broadcasting: 1\nI0323 00:28:45.778785 1934 log.go:172] (0xc000581ef0) Reply frame received for 1\nI0323 00:28:45.778838 1934 log.go:172] (0xc000581ef0) (0xc00069e000) Create stream\nI0323 00:28:45.778853 1934 log.go:172] (0xc000581ef0) (0xc00069e000) Stream added, broadcasting: 
3\nI0323 00:28:45.780024 1934 log.go:172] (0xc000581ef0) Reply frame received for 3\nI0323 00:28:45.780078 1934 log.go:172] (0xc000581ef0) (0xc0006c1540) Create stream\nI0323 00:28:45.780095 1934 log.go:172] (0xc000581ef0) (0xc0006c1540) Stream added, broadcasting: 5\nI0323 00:28:45.781641 1934 log.go:172] (0xc000581ef0) Reply frame received for 5\nI0323 00:28:45.843079 1934 log.go:172] (0xc000581ef0) Data frame received for 5\nI0323 00:28:45.843127 1934 log.go:172] (0xc0006c1540) (5) Data frame handling\nI0323 00:28:45.843154 1934 log.go:172] (0xc0006c1540) (5) Data frame sent\n+ nslookup clusterip-service\nI0323 00:28:45.849451 1934 log.go:172] (0xc000581ef0) Data frame received for 3\nI0323 00:28:45.849473 1934 log.go:172] (0xc00069e000) (3) Data frame handling\nI0323 00:28:45.849491 1934 log.go:172] (0xc00069e000) (3) Data frame sent\nI0323 00:28:45.850488 1934 log.go:172] (0xc000581ef0) Data frame received for 3\nI0323 00:28:45.850510 1934 log.go:172] (0xc00069e000) (3) Data frame handling\nI0323 00:28:45.850531 1934 log.go:172] (0xc00069e000) (3) Data frame sent\nI0323 00:28:45.850842 1934 log.go:172] (0xc000581ef0) Data frame received for 3\nI0323 00:28:45.850869 1934 log.go:172] (0xc00069e000) (3) Data frame handling\nI0323 00:28:45.851087 1934 log.go:172] (0xc000581ef0) Data frame received for 5\nI0323 00:28:45.851101 1934 log.go:172] (0xc0006c1540) (5) Data frame handling\nI0323 00:28:45.852511 1934 log.go:172] (0xc000581ef0) Data frame received for 1\nI0323 00:28:45.852525 1934 log.go:172] (0xc0006c14a0) (1) Data frame handling\nI0323 00:28:45.852563 1934 log.go:172] (0xc0006c14a0) (1) Data frame sent\nI0323 00:28:45.852581 1934 log.go:172] (0xc000581ef0) (0xc0006c14a0) Stream removed, broadcasting: 1\nI0323 00:28:45.852676 1934 log.go:172] (0xc000581ef0) Go away received\nI0323 00:28:45.852931 1934 log.go:172] (0xc000581ef0) (0xc0006c14a0) Stream removed, broadcasting: 1\nI0323 00:28:45.852950 1934 log.go:172] (0xc000581ef0) (0xc00069e000) Stream 
removed, broadcasting: 3\nI0323 00:28:45.852960 1934 log.go:172] (0xc000581ef0) (0xc0006c1540) Stream removed, broadcasting: 5\n" Mar 23 00:28:45.857: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8206.svc.cluster.local\tcanonical name = externalsvc.services-8206.svc.cluster.local.\nName:\texternalsvc.services-8206.svc.cluster.local\nAddress: 10.96.153.115\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8206, will wait for the garbage collector to delete the pods Mar 23 00:28:45.917: INFO: Deleting ReplicationController externalsvc took: 6.532598ms Mar 23 00:28:46.017: INFO: Terminating ReplicationController externalsvc pods took: 100.25997ms Mar 23 00:28:53.041: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:28:53.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8206" for this suite. 
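The Services test above creates a ClusterIP service, then mutates its type to ExternalName pointing at a second in-cluster service, and verifies via `nslookup` from an exec pod that the name now resolves as a CNAME (visible in the stdout above). A minimal sketch of that flow with plain kubectl — namespace, service, and pod names here are illustrative, not the test's generated objects:

```shell
# Create a ClusterIP service (illustrative names, assumed namespace "demo").
kubectl create service clusterip clusterip-service --tcp=80:80 -n demo

# Switch it to type=ExternalName targeting another service's cluster DNS name.
# Clearing spec.clusterIP is required when converting away from ClusterIP.
kubectl patch service clusterip-service -n demo --type merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.demo.svc.cluster.local","clusterIP":null}}'

# From any running pod, the name should now resolve as a CNAME to the target,
# mirroring the nslookup output captured in the log.
kubectl exec some-pod -n demo -- nslookup clusterip-service
```

Because ExternalName is implemented purely in cluster DNS (a CNAME record), no kube-proxy rules are programmed for the service after the switch.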
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:17.723 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":177,"skipped":3030,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:28:53.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 23 00:28:53.145: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4701 /api/v1/namespaces/watch-4701/configmaps/e2e-watch-test-watch-closed a139e144-2f75-4464-9e88-fb9aff3dcd99 2020381 0 2020-03-23 00:28:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:28:53.145: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4701 /api/v1/namespaces/watch-4701/configmaps/e2e-watch-test-watch-closed a139e144-2f75-4464-9e88-fb9aff3dcd99 2020382 0 2020-03-23 00:28:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 23 00:28:53.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4701 /api/v1/namespaces/watch-4701/configmaps/e2e-watch-test-watch-closed a139e144-2f75-4464-9e88-fb9aff3dcd99 2020383 0 2020-03-23 00:28:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:28:53.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4701 /api/v1/namespaces/watch-4701/configmaps/e2e-watch-test-watch-closed a139e144-2f75-4464-9e88-fb9aff3dcd99 2020384 0 2020-03-23 00:28:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:28:53.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4701" for this suite. 
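The Watchers test above records the resourceVersion of the last event delivered before the watch closes, then opens a new watch from that version and expects the intervening MODIFIED and DELETED events to be replayed. A rough way to observe the same semantics by hand, using the raw API through kubectl (the resourceVersion value is taken from the log and would differ on any real run):

```shell
# Read the current resourceVersion of the configmap (illustrative, matching the
# objects in the log above).
kubectl get configmap e2e-watch-test-watch-closed -n watch-4701 \
  -o jsonpath='{.metadata.resourceVersion}'

# A watch started from a previously observed resourceVersion streams every
# change made since that version, even if the original watch connection closed.
kubectl get --raw \
  "/api/v1/namespaces/watch-4701/configmaps?watch=true&resourceVersion=2020382"
```

This only works while the requested resourceVersion is still within the API server's watch cache window; an expired version returns a 410 Gone, which clients handle by re-listing.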
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":178,"skipped":3042,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:28:53.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:28:53.284: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:28:53.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1770" for this suite. 
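The CustomResourceDefinition test that follows exercises the CRD's own `/status` subresource: reading it, updating it, and patching it independently of the spec. A hedged sketch of the equivalent manual operations — the CRD name `widgets.mygroup.example.com` is a placeholder, and `kubectl patch --subresource` requires a reasonably recent kubectl:

```shell
# Read a CRD's status through its dedicated subresource endpoint
# (CRDs are cluster-scoped; the name below is hypothetical).
kubectl get --raw \
  "/apis/apiextensions.k8s.io/v1/customresourcedefinitions/widgets.mygroup.example.com/status"

# Patch only the status; writes through /status ignore changes to spec,
# which is the isolation property the conformance test verifies.
kubectl patch crd widgets.mygroup.example.com --subresource=status \
  --type=merge -p '{"status":{"storedVersions":["v1"]}}'
```

The same spec/status split applies to custom resources themselves once their CRD declares `subresources.status`.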
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":179,"skipped":3073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:28:53.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6409 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6409 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6409 Mar 23 00:28:53.971: INFO: Found 0 stateful pods, waiting for 1 Mar 23 00:29:03.976: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt 
with unhealthy stateful pod Mar 23 00:29:03.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:29:04.280: INFO: stderr: "I0323 00:29:04.112353 1956 log.go:172] (0xc000a28630) (0xc000928000) Create stream\nI0323 00:29:04.112406 1956 log.go:172] (0xc000a28630) (0xc000928000) Stream added, broadcasting: 1\nI0323 00:29:04.115203 1956 log.go:172] (0xc000a28630) Reply frame received for 1\nI0323 00:29:04.115243 1956 log.go:172] (0xc000a28630) (0xc0009280a0) Create stream\nI0323 00:29:04.115253 1956 log.go:172] (0xc000a28630) (0xc0009280a0) Stream added, broadcasting: 3\nI0323 00:29:04.116035 1956 log.go:172] (0xc000a28630) Reply frame received for 3\nI0323 00:29:04.116077 1956 log.go:172] (0xc000a28630) (0xc00060f220) Create stream\nI0323 00:29:04.116092 1956 log.go:172] (0xc000a28630) (0xc00060f220) Stream added, broadcasting: 5\nI0323 00:29:04.116881 1956 log.go:172] (0xc000a28630) Reply frame received for 5\nI0323 00:29:04.204565 1956 log.go:172] (0xc000a28630) Data frame received for 5\nI0323 00:29:04.204598 1956 log.go:172] (0xc00060f220) (5) Data frame handling\nI0323 00:29:04.204618 1956 log.go:172] (0xc00060f220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:29:04.274365 1956 log.go:172] (0xc000a28630) Data frame received for 3\nI0323 00:29:04.274412 1956 log.go:172] (0xc0009280a0) (3) Data frame handling\nI0323 00:29:04.274427 1956 log.go:172] (0xc0009280a0) (3) Data frame sent\nI0323 00:29:04.274440 1956 log.go:172] (0xc000a28630) Data frame received for 3\nI0323 00:29:04.274450 1956 log.go:172] (0xc0009280a0) (3) Data frame handling\nI0323 00:29:04.274475 1956 log.go:172] (0xc000a28630) Data frame received for 5\nI0323 00:29:04.274504 1956 log.go:172] (0xc00060f220) (5) Data frame handling\nI0323 00:29:04.276156 1956 log.go:172] (0xc000a28630) 
Data frame received for 1\nI0323 00:29:04.276191 1956 log.go:172] (0xc000928000) (1) Data frame handling\nI0323 00:29:04.276213 1956 log.go:172] (0xc000928000) (1) Data frame sent\nI0323 00:29:04.276237 1956 log.go:172] (0xc000a28630) (0xc000928000) Stream removed, broadcasting: 1\nI0323 00:29:04.276271 1956 log.go:172] (0xc000a28630) Go away received\nI0323 00:29:04.276688 1956 log.go:172] (0xc000a28630) (0xc000928000) Stream removed, broadcasting: 1\nI0323 00:29:04.276715 1956 log.go:172] (0xc000a28630) (0xc0009280a0) Stream removed, broadcasting: 3\nI0323 00:29:04.276729 1956 log.go:172] (0xc000a28630) (0xc00060f220) Stream removed, broadcasting: 5\n" Mar 23 00:29:04.281: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:29:04.281: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:29:04.285: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 23 00:29:14.288: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 23 00:29:14.288: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:29:14.299: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999428s Mar 23 00:29:15.302: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996946953s Mar 23 00:29:16.306: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993621601s Mar 23 00:29:17.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98975818s Mar 23 00:29:18.314: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985360315s Mar 23 00:29:19.319: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.981257871s Mar 23 00:29:20.323: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.976650682s Mar 23 00:29:21.328: INFO: Verifying statefulset ss doesn't scale past 1 for another 
2.972826822s Mar 23 00:29:22.341: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.968049922s Mar 23 00:29:23.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 954.720367ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6409 Mar 23 00:29:24.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:29:24.578: INFO: stderr: "I0323 00:29:24.487160 1977 log.go:172] (0xc0009ec000) (0xc0009c0000) Create stream\nI0323 00:29:24.487217 1977 log.go:172] (0xc0009ec000) (0xc0009c0000) Stream added, broadcasting: 1\nI0323 00:29:24.497964 1977 log.go:172] (0xc0009ec000) Reply frame received for 1\nI0323 00:29:24.498024 1977 log.go:172] (0xc0009ec000) (0xc0006c7360) Create stream\nI0323 00:29:24.498039 1977 log.go:172] (0xc0009ec000) (0xc0006c7360) Stream added, broadcasting: 3\nI0323 00:29:24.498858 1977 log.go:172] (0xc0009ec000) Reply frame received for 3\nI0323 00:29:24.498898 1977 log.go:172] (0xc0009ec000) (0xc000344000) Create stream\nI0323 00:29:24.498915 1977 log.go:172] (0xc0009ec000) (0xc000344000) Stream added, broadcasting: 5\nI0323 00:29:24.499709 1977 log.go:172] (0xc0009ec000) Reply frame received for 5\nI0323 00:29:24.571950 1977 log.go:172] (0xc0009ec000) Data frame received for 3\nI0323 00:29:24.571991 1977 log.go:172] (0xc0006c7360) (3) Data frame handling\nI0323 00:29:24.572006 1977 log.go:172] (0xc0006c7360) (3) Data frame sent\nI0323 00:29:24.572071 1977 log.go:172] (0xc0009ec000) Data frame received for 5\nI0323 00:29:24.572118 1977 log.go:172] (0xc0009ec000) Data frame received for 3\nI0323 00:29:24.572276 1977 log.go:172] (0xc0006c7360) (3) Data frame handling\nI0323 00:29:24.572367 1977 log.go:172] (0xc000344000) (5) Data frame handling\nI0323 00:29:24.572405 1977 
log.go:172] (0xc000344000) (5) Data frame sent\nI0323 00:29:24.572422 1977 log.go:172] (0xc0009ec000) Data frame received for 5\nI0323 00:29:24.572465 1977 log.go:172] (0xc000344000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 00:29:24.574345 1977 log.go:172] (0xc0009ec000) Data frame received for 1\nI0323 00:29:24.574368 1977 log.go:172] (0xc0009c0000) (1) Data frame handling\nI0323 00:29:24.574383 1977 log.go:172] (0xc0009c0000) (1) Data frame sent\nI0323 00:29:24.574399 1977 log.go:172] (0xc0009ec000) (0xc0009c0000) Stream removed, broadcasting: 1\nI0323 00:29:24.574414 1977 log.go:172] (0xc0009ec000) Go away received\nI0323 00:29:24.574939 1977 log.go:172] (0xc0009ec000) (0xc0009c0000) Stream removed, broadcasting: 1\nI0323 00:29:24.574964 1977 log.go:172] (0xc0009ec000) (0xc0006c7360) Stream removed, broadcasting: 3\nI0323 00:29:24.574978 1977 log.go:172] (0xc0009ec000) (0xc000344000) Stream removed, broadcasting: 5\n" Mar 23 00:29:24.579: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:29:24.579: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:29:24.583: INFO: Found 1 stateful pods, waiting for 3 Mar 23 00:29:34.588: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:29:34.588: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:29:34.588: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 23 00:29:34.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 
00:29:34.803: INFO: stderr: "I0323 00:29:34.717047 1998 log.go:172] (0xc0000ec630) (0xc00041c280) Create stream\nI0323 00:29:34.717097 1998 log.go:172] (0xc0000ec630) (0xc00041c280) Stream added, broadcasting: 1\nI0323 00:29:34.719521 1998 log.go:172] (0xc0000ec630) Reply frame received for 1\nI0323 00:29:34.719572 1998 log.go:172] (0xc0000ec630) (0xc000184000) Create stream\nI0323 00:29:34.719592 1998 log.go:172] (0xc0000ec630) (0xc000184000) Stream added, broadcasting: 3\nI0323 00:29:34.720732 1998 log.go:172] (0xc0000ec630) Reply frame received for 3\nI0323 00:29:34.720792 1998 log.go:172] (0xc0000ec630) (0xc00069e000) Create stream\nI0323 00:29:34.720816 1998 log.go:172] (0xc0000ec630) (0xc00069e000) Stream added, broadcasting: 5\nI0323 00:29:34.722117 1998 log.go:172] (0xc0000ec630) Reply frame received for 5\nI0323 00:29:34.797874 1998 log.go:172] (0xc0000ec630) Data frame received for 5\nI0323 00:29:34.797904 1998 log.go:172] (0xc00069e000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:29:34.797931 1998 log.go:172] (0xc0000ec630) Data frame received for 3\nI0323 00:29:34.797962 1998 log.go:172] (0xc000184000) (3) Data frame handling\nI0323 00:29:34.797972 1998 log.go:172] (0xc000184000) (3) Data frame sent\nI0323 00:29:34.797986 1998 log.go:172] (0xc0000ec630) Data frame received for 3\nI0323 00:29:34.797993 1998 log.go:172] (0xc000184000) (3) Data frame handling\nI0323 00:29:34.798015 1998 log.go:172] (0xc00069e000) (5) Data frame sent\nI0323 00:29:34.798035 1998 log.go:172] (0xc0000ec630) Data frame received for 5\nI0323 00:29:34.798044 1998 log.go:172] (0xc00069e000) (5) Data frame handling\nI0323 00:29:34.799275 1998 log.go:172] (0xc0000ec630) Data frame received for 1\nI0323 00:29:34.799300 1998 log.go:172] (0xc00041c280) (1) Data frame handling\nI0323 00:29:34.799316 1998 log.go:172] (0xc00041c280) (1) Data frame sent\nI0323 00:29:34.799329 1998 log.go:172] (0xc0000ec630) (0xc00041c280) Stream removed, 
broadcasting: 1\nI0323 00:29:34.799343 1998 log.go:172] (0xc0000ec630) Go away received\nI0323 00:29:34.799658 1998 log.go:172] (0xc0000ec630) (0xc00041c280) Stream removed, broadcasting: 1\nI0323 00:29:34.799670 1998 log.go:172] (0xc0000ec630) (0xc000184000) Stream removed, broadcasting: 3\nI0323 00:29:34.799675 1998 log.go:172] (0xc0000ec630) (0xc00069e000) Stream removed, broadcasting: 5\n" Mar 23 00:29:34.803: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:29:34.803: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:29:34.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:29:35.026: INFO: stderr: "I0323 00:29:34.931044 2021 log.go:172] (0xc0007ac580) (0xc0006e7540) Create stream\nI0323 00:29:34.931095 2021 log.go:172] (0xc0007ac580) (0xc0006e7540) Stream added, broadcasting: 1\nI0323 00:29:34.933057 2021 log.go:172] (0xc0007ac580) Reply frame received for 1\nI0323 00:29:34.933081 2021 log.go:172] (0xc0007ac580) (0xc0006e75e0) Create stream\nI0323 00:29:34.933093 2021 log.go:172] (0xc0007ac580) (0xc0006e75e0) Stream added, broadcasting: 3\nI0323 00:29:34.934028 2021 log.go:172] (0xc0007ac580) Reply frame received for 3\nI0323 00:29:34.934069 2021 log.go:172] (0xc0007ac580) (0xc000a28000) Create stream\nI0323 00:29:34.934089 2021 log.go:172] (0xc0007ac580) (0xc000a28000) Stream added, broadcasting: 5\nI0323 00:29:34.934827 2021 log.go:172] (0xc0007ac580) Reply frame received for 5\nI0323 00:29:34.994641 2021 log.go:172] (0xc0007ac580) Data frame received for 5\nI0323 00:29:34.994690 2021 log.go:172] (0xc000a28000) (5) Data frame handling\nI0323 00:29:34.994727 2021 log.go:172] (0xc000a28000) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:29:35.019537 2021 log.go:172] (0xc0007ac580) Data frame received for 3\nI0323 00:29:35.019671 2021 log.go:172] (0xc0006e75e0) (3) Data frame handling\nI0323 00:29:35.019789 2021 log.go:172] (0xc0006e75e0) (3) Data frame sent\nI0323 00:29:35.019822 2021 log.go:172] (0xc0007ac580) Data frame received for 3\nI0323 00:29:35.019840 2021 log.go:172] (0xc0006e75e0) (3) Data frame handling\nI0323 00:29:35.019869 2021 log.go:172] (0xc0007ac580) Data frame received for 5\nI0323 00:29:35.019893 2021 log.go:172] (0xc000a28000) (5) Data frame handling\nI0323 00:29:35.021992 2021 log.go:172] (0xc0007ac580) Data frame received for 1\nI0323 00:29:35.022013 2021 log.go:172] (0xc0006e7540) (1) Data frame handling\nI0323 00:29:35.022032 2021 log.go:172] (0xc0006e7540) (1) Data frame sent\nI0323 00:29:35.022042 2021 log.go:172] (0xc0007ac580) (0xc0006e7540) Stream removed, broadcasting: 1\nI0323 00:29:35.022058 2021 log.go:172] (0xc0007ac580) Go away received\nI0323 00:29:35.022535 2021 log.go:172] (0xc0007ac580) (0xc0006e7540) Stream removed, broadcasting: 1\nI0323 00:29:35.022569 2021 log.go:172] (0xc0007ac580) (0xc0006e75e0) Stream removed, broadcasting: 3\nI0323 00:29:35.022583 2021 log.go:172] (0xc0007ac580) (0xc000a28000) Stream removed, broadcasting: 5\n" Mar 23 00:29:35.026: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:29:35.026: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:29:35.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:29:35.272: INFO: stderr: "I0323 00:29:35.154307 2043 log.go:172] (0xc0003e0420) (0xc0006cd400) Create stream\nI0323 00:29:35.154409 2043 log.go:172] (0xc0003e0420) 
(0xc0006cd400) Stream added, broadcasting: 1\nI0323 00:29:35.156712 2043 log.go:172] (0xc0003e0420) Reply frame received for 1\nI0323 00:29:35.156761 2043 log.go:172] (0xc0003e0420) (0xc000994000) Create stream\nI0323 00:29:35.156775 2043 log.go:172] (0xc0003e0420) (0xc000994000) Stream added, broadcasting: 3\nI0323 00:29:35.158246 2043 log.go:172] (0xc0003e0420) Reply frame received for 3\nI0323 00:29:35.158314 2043 log.go:172] (0xc0003e0420) (0xc0004c2960) Create stream\nI0323 00:29:35.158344 2043 log.go:172] (0xc0003e0420) (0xc0004c2960) Stream added, broadcasting: 5\nI0323 00:29:35.159389 2043 log.go:172] (0xc0003e0420) Reply frame received for 5\nI0323 00:29:35.236641 2043 log.go:172] (0xc0003e0420) Data frame received for 5\nI0323 00:29:35.236675 2043 log.go:172] (0xc0004c2960) (5) Data frame handling\nI0323 00:29:35.236696 2043 log.go:172] (0xc0004c2960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:29:35.265438 2043 log.go:172] (0xc0003e0420) Data frame received for 3\nI0323 00:29:35.265471 2043 log.go:172] (0xc000994000) (3) Data frame handling\nI0323 00:29:35.265523 2043 log.go:172] (0xc000994000) (3) Data frame sent\nI0323 00:29:35.265552 2043 log.go:172] (0xc0003e0420) Data frame received for 3\nI0323 00:29:35.265570 2043 log.go:172] (0xc000994000) (3) Data frame handling\nI0323 00:29:35.265947 2043 log.go:172] (0xc0003e0420) Data frame received for 5\nI0323 00:29:35.265962 2043 log.go:172] (0xc0004c2960) (5) Data frame handling\nI0323 00:29:35.267640 2043 log.go:172] (0xc0003e0420) Data frame received for 1\nI0323 00:29:35.267660 2043 log.go:172] (0xc0006cd400) (1) Data frame handling\nI0323 00:29:35.267683 2043 log.go:172] (0xc0006cd400) (1) Data frame sent\nI0323 00:29:35.267703 2043 log.go:172] (0xc0003e0420) (0xc0006cd400) Stream removed, broadcasting: 1\nI0323 00:29:35.268005 2043 log.go:172] (0xc0003e0420) Go away received\nI0323 00:29:35.268115 2043 log.go:172] (0xc0003e0420) (0xc0006cd400) Stream removed, 
broadcasting: 1\nI0323 00:29:35.268146 2043 log.go:172] (0xc0003e0420) (0xc000994000) Stream removed, broadcasting: 3\nI0323 00:29:35.268160 2043 log.go:172] (0xc0003e0420) (0xc0004c2960) Stream removed, broadcasting: 5\n" Mar 23 00:29:35.272: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:29:35.272: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:29:35.272: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:29:35.274: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 23 00:29:45.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 23 00:29:45.289: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 23 00:29:45.289: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 23 00:29:45.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999382s Mar 23 00:29:46.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996145343s Mar 23 00:29:47.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991321398s Mar 23 00:29:48.315: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987553095s Mar 23 00:29:49.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983251555s Mar 23 00:29:50.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952158939s Mar 23 00:29:51.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.948032457s Mar 23 00:29:52.360: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.943244797s Mar 23 00:29:53.376: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93826379s Mar 23 00:29:54.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 921.76464ms STEP: Scaling down 
stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6409 Mar 23 00:29:55.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:29:55.581: INFO: stderr: "I0323 00:29:55.515385 2065 log.go:172] (0xc000ac46e0) (0xc0009f00a0) Create stream\nI0323 00:29:55.515436 2065 log.go:172] (0xc000ac46e0) (0xc0009f00a0) Stream added, broadcasting: 1\nI0323 00:29:55.518504 2065 log.go:172] (0xc000ac46e0) Reply frame received for 1\nI0323 00:29:55.518559 2065 log.go:172] (0xc000ac46e0) (0xc000701360) Create stream\nI0323 00:29:55.518573 2065 log.go:172] (0xc000ac46e0) (0xc000701360) Stream added, broadcasting: 3\nI0323 00:29:55.519616 2065 log.go:172] (0xc000ac46e0) Reply frame received for 3\nI0323 00:29:55.519670 2065 log.go:172] (0xc000ac46e0) (0xc000701540) Create stream\nI0323 00:29:55.519686 2065 log.go:172] (0xc000ac46e0) (0xc000701540) Stream added, broadcasting: 5\nI0323 00:29:55.520829 2065 log.go:172] (0xc000ac46e0) Reply frame received for 5\nI0323 00:29:55.574540 2065 log.go:172] (0xc000ac46e0) Data frame received for 3\nI0323 00:29:55.574569 2065 log.go:172] (0xc000701360) (3) Data frame handling\nI0323 00:29:55.574593 2065 log.go:172] (0xc000ac46e0) Data frame received for 5\nI0323 00:29:55.574622 2065 log.go:172] (0xc000701540) (5) Data frame handling\nI0323 00:29:55.574642 2065 log.go:172] (0xc000701540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 00:29:55.574673 2065 log.go:172] (0xc000701360) (3) Data frame sent\nI0323 00:29:55.574711 2065 log.go:172] (0xc000ac46e0) Data frame received for 3\nI0323 00:29:55.574736 2065 log.go:172] (0xc000701360) (3) Data frame handling\nI0323 00:29:55.574761 2065 log.go:172] (0xc000ac46e0) Data frame received for 5\nI0323 00:29:55.574772 2065 log.go:172] (0xc000701540) (5) Data 
frame handling\nI0323 00:29:55.576237 2065 log.go:172] (0xc000ac46e0) Data frame received for 1\nI0323 00:29:55.576249 2065 log.go:172] (0xc0009f00a0) (1) Data frame handling\nI0323 00:29:55.576256 2065 log.go:172] (0xc0009f00a0) (1) Data frame sent\nI0323 00:29:55.576267 2065 log.go:172] (0xc000ac46e0) (0xc0009f00a0) Stream removed, broadcasting: 1\nI0323 00:29:55.576364 2065 log.go:172] (0xc000ac46e0) Go away received\nI0323 00:29:55.576576 2065 log.go:172] (0xc000ac46e0) (0xc0009f00a0) Stream removed, broadcasting: 1\nI0323 00:29:55.576589 2065 log.go:172] (0xc000ac46e0) (0xc000701360) Stream removed, broadcasting: 3\nI0323 00:29:55.576595 2065 log.go:172] (0xc000ac46e0) (0xc000701540) Stream removed, broadcasting: 5\n" Mar 23 00:29:55.581: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:29:55.581: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:29:55.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:29:55.806: INFO: stderr: "I0323 00:29:55.721728 2087 log.go:172] (0xc000b24160) (0xc000560a00) Create stream\nI0323 00:29:55.721776 2087 log.go:172] (0xc000b24160) (0xc000560a00) Stream added, broadcasting: 1\nI0323 00:29:55.724332 2087 log.go:172] (0xc000b24160) Reply frame received for 1\nI0323 00:29:55.724382 2087 log.go:172] (0xc000b24160) (0xc0007d5180) Create stream\nI0323 00:29:55.724399 2087 log.go:172] (0xc000b24160) (0xc0007d5180) Stream added, broadcasting: 3\nI0323 00:29:55.725518 2087 log.go:172] (0xc000b24160) Reply frame received for 3\nI0323 00:29:55.725580 2087 log.go:172] (0xc000b24160) (0xc00067c000) Create stream\nI0323 00:29:55.725602 2087 log.go:172] (0xc000b24160) (0xc00067c000) Stream added, broadcasting: 5\nI0323 
00:29:55.726744 2087 log.go:172] (0xc000b24160) Reply frame received for 5\nI0323 00:29:55.798987 2087 log.go:172] (0xc000b24160) Data frame received for 3\nI0323 00:29:55.799030 2087 log.go:172] (0xc0007d5180) (3) Data frame handling\nI0323 00:29:55.799053 2087 log.go:172] (0xc0007d5180) (3) Data frame sent\nI0323 00:29:55.799070 2087 log.go:172] (0xc000b24160) Data frame received for 3\nI0323 00:29:55.799087 2087 log.go:172] (0xc0007d5180) (3) Data frame handling\nI0323 00:29:55.799110 2087 log.go:172] (0xc000b24160) Data frame received for 5\nI0323 00:29:55.799129 2087 log.go:172] (0xc00067c000) (5) Data frame handling\nI0323 00:29:55.799152 2087 log.go:172] (0xc00067c000) (5) Data frame sent\nI0323 00:29:55.799171 2087 log.go:172] (0xc000b24160) Data frame received for 5\nI0323 00:29:55.799187 2087 log.go:172] (0xc00067c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 00:29:55.802147 2087 log.go:172] (0xc000b24160) Data frame received for 1\nI0323 00:29:55.802197 2087 log.go:172] (0xc000560a00) (1) Data frame handling\nI0323 00:29:55.802241 2087 log.go:172] (0xc000560a00) (1) Data frame sent\nI0323 00:29:55.802278 2087 log.go:172] (0xc000b24160) (0xc000560a00) Stream removed, broadcasting: 1\nI0323 00:29:55.802498 2087 log.go:172] (0xc000b24160) Go away received\nI0323 00:29:55.802840 2087 log.go:172] (0xc000b24160) (0xc000560a00) Stream removed, broadcasting: 1\nI0323 00:29:55.802881 2087 log.go:172] (0xc000b24160) (0xc0007d5180) Stream removed, broadcasting: 3\nI0323 00:29:55.802906 2087 log.go:172] (0xc000b24160) (0xc00067c000) Stream removed, broadcasting: 5\n" Mar 23 00:29:55.806: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:29:55.806: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:29:55.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6409 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:29:56.008: INFO: stderr: "I0323 00:29:55.938316 2108 log.go:172] (0xc000b43760) (0xc0008e8780) Create stream\nI0323 00:29:55.938364 2108 log.go:172] (0xc000b43760) (0xc0008e8780) Stream added, broadcasting: 1\nI0323 00:29:55.943014 2108 log.go:172] (0xc000b43760) Reply frame received for 1\nI0323 00:29:55.943067 2108 log.go:172] (0xc000b43760) (0xc000655540) Create stream\nI0323 00:29:55.943082 2108 log.go:172] (0xc000b43760) (0xc000655540) Stream added, broadcasting: 3\nI0323 00:29:55.944398 2108 log.go:172] (0xc000b43760) Reply frame received for 3\nI0323 00:29:55.944450 2108 log.go:172] (0xc000b43760) (0xc0002d8960) Create stream\nI0323 00:29:55.944465 2108 log.go:172] (0xc000b43760) (0xc0002d8960) Stream added, broadcasting: 5\nI0323 00:29:55.945775 2108 log.go:172] (0xc000b43760) Reply frame received for 5\nI0323 00:29:56.000929 2108 log.go:172] (0xc000b43760) Data frame received for 3\nI0323 00:29:56.000952 2108 log.go:172] (0xc000655540) (3) Data frame handling\nI0323 00:29:56.000965 2108 log.go:172] (0xc000655540) (3) Data frame sent\nI0323 00:29:56.000975 2108 log.go:172] (0xc000b43760) Data frame received for 3\nI0323 00:29:56.000983 2108 log.go:172] (0xc000655540) (3) Data frame handling\nI0323 00:29:56.001411 2108 log.go:172] (0xc000b43760) Data frame received for 5\nI0323 00:29:56.001435 2108 log.go:172] (0xc0002d8960) (5) Data frame handling\nI0323 00:29:56.001456 2108 log.go:172] (0xc0002d8960) (5) Data frame sent\nI0323 00:29:56.001470 2108 log.go:172] (0xc000b43760) Data frame received for 5\nI0323 00:29:56.001482 2108 log.go:172] (0xc0002d8960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 00:29:56.003226 2108 log.go:172] (0xc000b43760) Data frame received for 1\nI0323 00:29:56.003255 2108 log.go:172] (0xc0008e8780) (1) Data frame handling\nI0323 
00:29:56.003278 2108 log.go:172] (0xc0008e8780) (1) Data frame sent\nI0323 00:29:56.003307 2108 log.go:172] (0xc000b43760) (0xc0008e8780) Stream removed, broadcasting: 1\nI0323 00:29:56.003328 2108 log.go:172] (0xc000b43760) Go away received\nI0323 00:29:56.003727 2108 log.go:172] (0xc000b43760) (0xc0008e8780) Stream removed, broadcasting: 1\nI0323 00:29:56.003752 2108 log.go:172] (0xc000b43760) (0xc000655540) Stream removed, broadcasting: 3\nI0323 00:29:56.003766 2108 log.go:172] (0xc000b43760) (0xc0002d8960) Stream removed, broadcasting: 5\n" Mar 23 00:29:56.008: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:29:56.008: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:29:56.008: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 23 00:30:16.024: INFO: Deleting all statefulset in ns statefulset-6409 Mar 23 00:30:16.027: INFO: Scaling statefulset ss to 0 Mar 23 00:30:16.036: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:30:16.039: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:30:16.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6409" for this suite. 
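The `mv -v … || true` commands that this test runs through `kubectl exec` toggle each pod's readiness by moving the served `index.html` out of the httpd docroot (and later back in); the `|| true` keeps the exit code zero even when the file has already been moved, so the toggle is idempotent. A minimal local sketch of that pattern, using temporary directories as hypothetical stand-ins for the pod's paths:

```shell
# Stand-in directories: $htdocs plays the role of /usr/local/apache2/htdocs,
# $tmp plays the role of /tmp inside the pod.
htdocs=$(mktemp -d)
tmp=$(mktemp -d)
echo 'hello' > "$htdocs/index.html"

# Moving index.html out of the docroot is what makes the pod's HTTP
# readiness probe start failing in the e2e test. '|| true' absorbs the
# non-zero exit code if the file was already moved, so a repeated exec
# of the same command does not fail.
mv -v "$htdocs/index.html" "$tmp/" || true
mv -v "$htdocs/index.html" "$tmp/" || true   # second run: mv errors, '|| true' absorbs it
```

Running the move a second time prints an error from `mv` but still exits 0, which is exactly the behavior the test relies on when it wiggles readiness on pods that may or may not have been toggled already.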
• [SLOW TEST:82.208 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":180,"skipped":3105,"failed":0} SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:30:16.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:30:16.158: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 11.737667ms) Mar 23 00:30:16.161: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.194506ms) Mar 23 00:30:16.164: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.826114ms) Mar 23 00:30:16.167: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.408432ms) Mar 23 00:30:16.170: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.294658ms) Mar 23 00:30:16.174: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.433058ms) Mar 23 00:30:16.177: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.47395ms) Mar 23 00:30:16.181: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.652154ms) Mar 23 00:30:16.184: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.358762ms) Mar 23 00:30:16.188: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.530374ms) Mar 23 00:30:16.209: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 21.189805ms) Mar 23 00:30:16.212: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.104152ms) Mar 23 00:30:16.218: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 5.493ms) Mar 23 00:30:16.223: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 4.832606ms) Mar 23 00:30:16.227: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 4.088928ms) Mar 23 00:30:16.231: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.77055ms) Mar 23 00:30:16.234: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.54662ms) Mar 23 00:30:16.238: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.883013ms) Mar 23 00:30:16.242: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 4.026119ms) Mar 23 00:30:16.246: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ 
(200; 3.703208ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:30:16.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1574" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":181,"skipped":3111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:30:16.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-1888 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1888 to expose endpoints map[] Mar 23 00:30:16.349: INFO: Get endpoints failed (5.954159ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 23 00:30:17.352: INFO: successfully validated that service endpoint-test2 in namespace services-1888 exposes endpoints map[] (1.009447598s elapsed) STEP: Creating pod pod1 in namespace services-1888 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1888 to expose endpoints map[pod1:[80]] Mar 23 00:30:20.475: INFO: 
successfully validated that service endpoint-test2 in namespace services-1888 exposes endpoints map[pod1:[80]] (3.114736478s elapsed) STEP: Creating pod pod2 in namespace services-1888 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1888 to expose endpoints map[pod1:[80] pod2:[80]] Mar 23 00:30:24.574: INFO: successfully validated that service endpoint-test2 in namespace services-1888 exposes endpoints map[pod1:[80] pod2:[80]] (4.095713345s elapsed) STEP: Deleting pod pod1 in namespace services-1888 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1888 to expose endpoints map[pod2:[80]] Mar 23 00:30:25.649: INFO: successfully validated that service endpoint-test2 in namespace services-1888 exposes endpoints map[pod2:[80]] (1.069742439s elapsed) STEP: Deleting pod pod2 in namespace services-1888 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1888 to expose endpoints map[] Mar 23 00:30:26.664: INFO: successfully validated that service endpoint-test2 in namespace services-1888 exposes endpoints map[] (1.009834578s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:30:26.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1888" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.458 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":182,"skipped":3164,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:30:26.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:30:32.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4598" for this suite. STEP: Destroying namespace "nsdeletetest-7805" for this suite. Mar 23 00:30:32.914: INFO: Namespace nsdeletetest-7805 was already deleted STEP: Destroying namespace "nsdeletetest-9110" for this suite. • [SLOW TEST:6.201 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":183,"skipped":3166,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:30:32.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] 
[sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:30:37.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2235" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":184,"skipped":3179,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:30:37.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:30:37.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-247" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3198,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:30:37.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 23 00:30:37.687: INFO: Waiting up to 5m0s for pod "pod-62a1977a-b269-407d-a577-896d99d19402" in namespace "emptydir-845" to be "Succeeded or Failed" Mar 23 00:30:37.690: INFO: Pod "pod-62a1977a-b269-407d-a577-896d99d19402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.595634ms Mar 23 00:30:39.717: INFO: Pod "pod-62a1977a-b269-407d-a577-896d99d19402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03001734s Mar 23 00:30:41.721: INFO: Pod "pod-62a1977a-b269-407d-a577-896d99d19402": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033871438s STEP: Saw pod success Mar 23 00:30:41.721: INFO: Pod "pod-62a1977a-b269-407d-a577-896d99d19402" satisfied condition "Succeeded or Failed" Mar 23 00:30:41.724: INFO: Trying to get logs from node latest-worker2 pod pod-62a1977a-b269-407d-a577-896d99d19402 container test-container: STEP: delete the pod Mar 23 00:30:41.745: INFO: Waiting for pod pod-62a1977a-b269-407d-a577-896d99d19402 to disappear Mar 23 00:30:41.765: INFO: Pod pod-62a1977a-b269-407d-a577-896d99d19402 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:30:41.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-845" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3201,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:30:41.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod 
liveness-c019336e-4990-4b43-a95d-2189c8829de5 in namespace container-probe-6769 Mar 23 00:30:46.070: INFO: Started pod liveness-c019336e-4990-4b43-a95d-2189c8829de5 in namespace container-probe-6769 STEP: checking the pod's current state and verifying that restartCount is present Mar 23 00:30:46.073: INFO: Initial restart count of pod liveness-c019336e-4990-4b43-a95d-2189c8829de5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:34:46.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6769" for this suite. • [SLOW TEST:244.931 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:34:46.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] 
should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Mar 23 00:34:46.747: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix093009599/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:34:46.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6887" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":188,"skipped":3248,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:34:46.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:34:47.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9330" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":189,"skipped":3261,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:34:47.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-9b774e46-5060-44d0-b35c-79b161881b29 STEP: Creating a pod to test consume secrets Mar 23 00:34:47.392: INFO: Waiting up to 5m0s for pod "pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7" in namespace "secrets-4493" to be "Succeeded or Failed" Mar 23 00:34:47.413: INFO: Pod "pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.830778ms Mar 23 00:34:49.418: INFO: Pod "pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025398098s Mar 23 00:34:51.422: INFO: Pod "pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029688853s STEP: Saw pod success Mar 23 00:34:51.422: INFO: Pod "pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7" satisfied condition "Succeeded or Failed" Mar 23 00:34:51.425: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7 container secret-volume-test: STEP: delete the pod Mar 23 00:34:51.484: INFO: Waiting for pod pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7 to disappear Mar 23 00:34:51.496: INFO: Pod pod-secrets-586046bd-dc5c-46c2-9289-0f5e53dc1bf7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:34:51.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4493" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3272,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:34:51.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-881 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 23 
00:34:51.598: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 23 00:34:51.647: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 23 00:34:53.806: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 23 00:34:55.651: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:34:57.651: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:34:59.651: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:35:01.651: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:35:03.651: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:35:05.652: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:35:07.652: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:35:09.652: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:35:11.652: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:35:13.651: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 23 00:35:13.658: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 23 00:35:17.717: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-881 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 00:35:17.717: INFO: >>> kubeConfig: /root/.kube/config I0323 00:35:17.755873 7 log.go:172] (0xc002460580) (0xc000b5b860) Create stream I0323 00:35:17.755921 7 log.go:172] (0xc002460580) (0xc000b5b860) Stream added, broadcasting: 1 I0323 00:35:17.761656 7 log.go:172] (0xc002460580) Reply frame received for 1 I0323 00:35:17.761733 7 log.go:172] (0xc002460580) (0xc000b5bae0) Create stream I0323 00:35:17.761779 7 log.go:172] (0xc002460580) 
(0xc000b5bae0) Stream added, broadcasting: 3 I0323 00:35:17.764563 7 log.go:172] (0xc002460580) Reply frame received for 3 I0323 00:35:17.764595 7 log.go:172] (0xc002460580) (0xc000fa68c0) Create stream I0323 00:35:17.764609 7 log.go:172] (0xc002460580) (0xc000fa68c0) Stream added, broadcasting: 5 I0323 00:35:17.765462 7 log.go:172] (0xc002460580) Reply frame received for 5 I0323 00:35:18.825683 7 log.go:172] (0xc002460580) Data frame received for 5 I0323 00:35:18.825825 7 log.go:172] (0xc000fa68c0) (5) Data frame handling I0323 00:35:18.825876 7 log.go:172] (0xc002460580) Data frame received for 3 I0323 00:35:18.825965 7 log.go:172] (0xc000b5bae0) (3) Data frame handling I0323 00:35:18.826013 7 log.go:172] (0xc000b5bae0) (3) Data frame sent I0323 00:35:18.826038 7 log.go:172] (0xc002460580) Data frame received for 3 I0323 00:35:18.826059 7 log.go:172] (0xc000b5bae0) (3) Data frame handling I0323 00:35:18.827975 7 log.go:172] (0xc002460580) Data frame received for 1 I0323 00:35:18.828012 7 log.go:172] (0xc000b5b860) (1) Data frame handling I0323 00:35:18.828045 7 log.go:172] (0xc000b5b860) (1) Data frame sent I0323 00:35:18.828067 7 log.go:172] (0xc002460580) (0xc000b5b860) Stream removed, broadcasting: 1 I0323 00:35:18.828086 7 log.go:172] (0xc002460580) Go away received I0323 00:35:18.828338 7 log.go:172] (0xc002460580) (0xc000b5b860) Stream removed, broadcasting: 1 I0323 00:35:18.828359 7 log.go:172] (0xc002460580) (0xc000b5bae0) Stream removed, broadcasting: 3 I0323 00:35:18.828370 7 log.go:172] (0xc002460580) (0xc000fa68c0) Stream removed, broadcasting: 5 Mar 23 00:35:18.828: INFO: Found all expected endpoints: [netserver-0] Mar 23 00:35:18.832: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.140 8081 | grep -v '^\s*$'] Namespace:pod-network-test-881 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 00:35:18.832: INFO: >>> kubeConfig: 
/root/.kube/config I0323 00:35:18.871912 7 log.go:172] (0xc002460c60) (0xc002622140) Create stream I0323 00:35:18.871939 7 log.go:172] (0xc002460c60) (0xc002622140) Stream added, broadcasting: 1 I0323 00:35:18.874049 7 log.go:172] (0xc002460c60) Reply frame received for 1 I0323 00:35:18.874101 7 log.go:172] (0xc002460c60) (0xc002352f00) Create stream I0323 00:35:18.874118 7 log.go:172] (0xc002460c60) (0xc002352f00) Stream added, broadcasting: 3 I0323 00:35:18.875105 7 log.go:172] (0xc002460c60) Reply frame received for 3 I0323 00:35:18.875142 7 log.go:172] (0xc002460c60) (0xc0026221e0) Create stream I0323 00:35:18.875157 7 log.go:172] (0xc002460c60) (0xc0026221e0) Stream added, broadcasting: 5 I0323 00:35:18.876050 7 log.go:172] (0xc002460c60) Reply frame received for 5 I0323 00:35:19.953436 7 log.go:172] (0xc002460c60) Data frame received for 3 I0323 00:35:19.953465 7 log.go:172] (0xc002352f00) (3) Data frame handling I0323 00:35:19.953487 7 log.go:172] (0xc002352f00) (3) Data frame sent I0323 00:35:19.954015 7 log.go:172] (0xc002460c60) Data frame received for 5 I0323 00:35:19.954044 7 log.go:172] (0xc0026221e0) (5) Data frame handling I0323 00:35:19.954073 7 log.go:172] (0xc002460c60) Data frame received for 3 I0323 00:35:19.954090 7 log.go:172] (0xc002352f00) (3) Data frame handling I0323 00:35:19.955949 7 log.go:172] (0xc002460c60) Data frame received for 1 I0323 00:35:19.955971 7 log.go:172] (0xc002622140) (1) Data frame handling I0323 00:35:19.956001 7 log.go:172] (0xc002622140) (1) Data frame sent I0323 00:35:19.956019 7 log.go:172] (0xc002460c60) (0xc002622140) Stream removed, broadcasting: 1 I0323 00:35:19.956036 7 log.go:172] (0xc002460c60) Go away received I0323 00:35:19.956171 7 log.go:172] (0xc002460c60) (0xc002622140) Stream removed, broadcasting: 1 I0323 00:35:19.956210 7 log.go:172] (0xc002460c60) (0xc002352f00) Stream removed, broadcasting: 3 I0323 00:35:19.956237 7 log.go:172] (0xc002460c60) (0xc0026221e0) Stream removed, broadcasting: 5 Mar 23 
00:35:19.956: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:35:19.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-881" for this suite. • [SLOW TEST:28.464 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3278,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:35:19.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 23 00:35:20.022: INFO: Waiting up to 5m0s for pod 
"pod-6c20e442-2a68-4f77-80ba-916dfd99e42a" in namespace "emptydir-6772" to be "Succeeded or Failed" Mar 23 00:35:20.039: INFO: Pod "pod-6c20e442-2a68-4f77-80ba-916dfd99e42a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.61669ms Mar 23 00:35:22.042: INFO: Pod "pod-6c20e442-2a68-4f77-80ba-916dfd99e42a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020164136s Mar 23 00:35:24.046: INFO: Pod "pod-6c20e442-2a68-4f77-80ba-916dfd99e42a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024070329s STEP: Saw pod success Mar 23 00:35:24.046: INFO: Pod "pod-6c20e442-2a68-4f77-80ba-916dfd99e42a" satisfied condition "Succeeded or Failed" Mar 23 00:35:24.049: INFO: Trying to get logs from node latest-worker pod pod-6c20e442-2a68-4f77-80ba-916dfd99e42a container test-container: STEP: delete the pod Mar 23 00:35:24.091: INFO: Waiting for pod pod-6c20e442-2a68-4f77-80ba-916dfd99e42a to disappear Mar 23 00:35:24.101: INFO: Pod pod-6c20e442-2a68-4f77-80ba-916dfd99e42a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:35:24.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6772" for this suite. 
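The emptydir test above mounts an `emptyDir` volume on the default medium and verifies that a file created there as a non-root user carries mode 0644. A local sketch of just the permission check, using a scratch directory as a stand-in for the volume (file name and content are placeholders):

```python
import os
import stat
import tempfile

# Local sketch of the permission check only: the scratch directory stands in
# for the emptyDir volume; file name and content are hypothetical.
scratch = tempfile.mkdtemp()
path = os.path.join(scratch, "test-file")

with open(path, "w") as f:
    f.write("hello from the volume\n")
os.chmod(path, 0o644)  # the mode the test expects on the created file

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # prints 0o644
```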
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:35:24.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-9931/configmap-test-a88355b7-ea3c-4327-9442-8071b46cb557 STEP: Creating a pod to test consume configMaps Mar 23 00:35:24.205: INFO: Waiting up to 5m0s for pod "pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2" in namespace "configmap-9931" to be "Succeeded or Failed" Mar 23 00:35:24.209: INFO: Pod "pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551971ms Mar 23 00:35:26.309: INFO: Pod "pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10378414s Mar 23 00:35:28.312: INFO: Pod "pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.106956661s STEP: Saw pod success Mar 23 00:35:28.312: INFO: Pod "pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2" satisfied condition "Succeeded or Failed" Mar 23 00:35:28.315: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2 container env-test: STEP: delete the pod Mar 23 00:35:28.479: INFO: Waiting for pod pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2 to disappear Mar 23 00:35:28.502: INFO: Pod pod-configmaps-4096dd50-12da-4f45-bc5b-f23903e9bcc2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:35:28.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9931" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:35:28.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Mar 23 00:35:29.098: INFO: created pod pod-service-account-defaultsa Mar 23 00:35:29.098: INFO: pod pod-service-account-defaultsa service account 
token volume mount: true Mar 23 00:35:29.102: INFO: created pod pod-service-account-mountsa Mar 23 00:35:29.102: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 23 00:35:29.141: INFO: created pod pod-service-account-nomountsa Mar 23 00:35:29.141: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 23 00:35:29.174: INFO: created pod pod-service-account-defaultsa-mountspec Mar 23 00:35:29.174: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 23 00:35:29.189: INFO: created pod pod-service-account-mountsa-mountspec Mar 23 00:35:29.189: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 23 00:35:29.212: INFO: created pod pod-service-account-nomountsa-mountspec Mar 23 00:35:29.212: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 23 00:35:29.298: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 23 00:35:29.298: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 23 00:35:29.316: INFO: created pod pod-service-account-mountsa-nomountspec Mar 23 00:35:29.316: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 23 00:35:29.337: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 23 00:35:29.338: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:35:29.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2100" for this suite. 
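The matrix above covers pods where the ServiceAccount, the pod spec, neither, or both opt out of token automount; per the logged results, the pod-spec field wins when both are set (e.g. `nomountsa-mountspec` still mounts). A hypothetical pair of manifests showing the two knobs — names and image are placeholders, not the test's generated ones:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false      # opt out at the ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-pod
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false    # pod-spec field overrides the SA
  containers:
  - name: main
    image: registry.k8s.io/pause:3.9
```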
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":194,"skipped":3395,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:35:29.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 23 00:35:42.654: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:35:43.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-484" for this suite. 
• [SLOW TEST:14.197 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":195,"skipped":3397,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:35:43.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 23 00:35:48.546: INFO: Successfully updated pod "adopt-release-8jx8l" STEP: Checking that the Job readopts the Pod Mar 23 00:35:48.546: INFO: Waiting up to 15m0s for pod "adopt-release-8jx8l" in namespace "job-2927" to be "adopted" Mar 23 00:35:48.566: INFO: Pod "adopt-release-8jx8l": Phase="Running", Reason="", readiness=true. Elapsed: 20.56256ms Mar 23 00:35:50.571: INFO: Pod "adopt-release-8jx8l": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.024887256s Mar 23 00:35:50.571: INFO: Pod "adopt-release-8jx8l" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 23 00:35:51.080: INFO: Successfully updated pod "adopt-release-8jx8l" STEP: Checking that the Job releases the Pod Mar 23 00:35:51.081: INFO: Waiting up to 15m0s for pod "adopt-release-8jx8l" in namespace "job-2927" to be "released" Mar 23 00:35:51.087: INFO: Pod "adopt-release-8jx8l": Phase="Running", Reason="", readiness=true. Elapsed: 6.588505ms Mar 23 00:35:53.091: INFO: Pod "adopt-release-8jx8l": Phase="Running", Reason="", readiness=true. Elapsed: 2.010347165s Mar 23 00:35:53.091: INFO: Pod "adopt-release-8jx8l" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:35:53.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2927" for this suite. • [SLOW TEST:9.421 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":196,"skipped":3398,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:35:53.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 23 00:35:53.194: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:35:59.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4728" for this suite. • [SLOW TEST:6.307 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":197,"skipped":3411,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:35:59.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 23 00:36:04.027: INFO: Successfully updated pod "labelsupdatef4c2d4fb-3047-44db-87cd-808e2b14a71f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:36:06.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7438" for this suite. • [SLOW TEST:6.646 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:36:06.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default 
command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Mar 23 00:36:06.145: INFO: Waiting up to 5m0s for pod "client-containers-f34bc7fd-6111-478f-9ced-6e2784961684" in namespace "containers-863" to be "Succeeded or Failed" Mar 23 00:36:06.147: INFO: Pod "client-containers-f34bc7fd-6111-478f-9ced-6e2784961684": Phase="Pending", Reason="", readiness=false. Elapsed: 1.916046ms Mar 23 00:36:08.153: INFO: Pod "client-containers-f34bc7fd-6111-478f-9ced-6e2784961684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007967761s Mar 23 00:36:10.157: INFO: Pod "client-containers-f34bc7fd-6111-478f-9ced-6e2784961684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011884582s STEP: Saw pod success Mar 23 00:36:10.157: INFO: Pod "client-containers-f34bc7fd-6111-478f-9ced-6e2784961684" satisfied condition "Succeeded or Failed" Mar 23 00:36:10.160: INFO: Trying to get logs from node latest-worker2 pod client-containers-f34bc7fd-6111-478f-9ced-6e2784961684 container test-container: STEP: delete the pod Mar 23 00:36:10.190: INFO: Waiting for pod client-containers-f34bc7fd-6111-478f-9ced-6e2784961684 to disappear Mar 23 00:36:10.201: INFO: Pod client-containers-f34bc7fd-6111-478f-9ced-6e2784961684 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:36:10.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-863" for this suite. 
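The Docker Containers test above overrides both the image's default command and its arguments. A hypothetical pod spec showing the equivalent override (`command` replaces the image's ENTRYPOINT, `args` replaces its CMD); pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]    # overrides the image ENTRYPOINT
    args: ["hello", "world"]  # overrides the image CMD
```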
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3463,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:36:10.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-2822 STEP: creating replication controller nodeport-test in namespace services-2822 I0323 00:36:10.337594 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2822, replica count: 2 I0323 00:36:13.388173 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 00:36:16.388390 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 00:36:16.388: INFO: Creating new exec pod Mar 23 00:36:21.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2822 execpodpmln9 -- /bin/sh -x -c nc -zv -t -w 2 
nodeport-test 80' Mar 23 00:36:21.632: INFO: stderr: "I0323 00:36:21.542589 2148 log.go:172] (0xc0009ca0b0) (0xc0002e3c20) Create stream\nI0323 00:36:21.542634 2148 log.go:172] (0xc0009ca0b0) (0xc0002e3c20) Stream added, broadcasting: 1\nI0323 00:36:21.544906 2148 log.go:172] (0xc0009ca0b0) Reply frame received for 1\nI0323 00:36:21.544948 2148 log.go:172] (0xc0009ca0b0) (0xc000910000) Create stream\nI0323 00:36:21.544961 2148 log.go:172] (0xc0009ca0b0) (0xc000910000) Stream added, broadcasting: 3\nI0323 00:36:21.545960 2148 log.go:172] (0xc0009ca0b0) Reply frame received for 3\nI0323 00:36:21.546006 2148 log.go:172] (0xc0009ca0b0) (0xc000970000) Create stream\nI0323 00:36:21.546022 2148 log.go:172] (0xc0009ca0b0) (0xc000970000) Stream added, broadcasting: 5\nI0323 00:36:21.546994 2148 log.go:172] (0xc0009ca0b0) Reply frame received for 5\nI0323 00:36:21.624045 2148 log.go:172] (0xc0009ca0b0) Data frame received for 5\nI0323 00:36:21.624074 2148 log.go:172] (0xc000970000) (5) Data frame handling\nI0323 00:36:21.624096 2148 log.go:172] (0xc000970000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0323 00:36:21.624767 2148 log.go:172] (0xc0009ca0b0) Data frame received for 5\nI0323 00:36:21.624813 2148 log.go:172] (0xc000970000) (5) Data frame handling\nI0323 00:36:21.624845 2148 log.go:172] (0xc000970000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0323 00:36:21.624993 2148 log.go:172] (0xc0009ca0b0) Data frame received for 3\nI0323 00:36:21.625016 2148 log.go:172] (0xc000910000) (3) Data frame handling\nI0323 00:36:21.625693 2148 log.go:172] (0xc0009ca0b0) Data frame received for 5\nI0323 00:36:21.625715 2148 log.go:172] (0xc000970000) (5) Data frame handling\nI0323 00:36:21.627425 2148 log.go:172] (0xc0009ca0b0) Data frame received for 1\nI0323 00:36:21.627463 2148 log.go:172] (0xc0002e3c20) (1) Data frame handling\nI0323 00:36:21.627494 2148 log.go:172] (0xc0002e3c20) (1) Data frame sent\nI0323 00:36:21.627521 2148 
log.go:172] (0xc0009ca0b0) (0xc0002e3c20) Stream removed, broadcasting: 1\nI0323 00:36:21.627566 2148 log.go:172] (0xc0009ca0b0) Go away received\nI0323 00:36:21.627856 2148 log.go:172] (0xc0009ca0b0) (0xc0002e3c20) Stream removed, broadcasting: 1\nI0323 00:36:21.627877 2148 log.go:172] (0xc0009ca0b0) (0xc000910000) Stream removed, broadcasting: 3\nI0323 00:36:21.627886 2148 log.go:172] (0xc0009ca0b0) (0xc000970000) Stream removed, broadcasting: 5\n" Mar 23 00:36:21.632: INFO: stdout: "" Mar 23 00:36:21.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2822 execpodpmln9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.242.148 80' Mar 23 00:36:21.822: INFO: stderr: "I0323 00:36:21.762889 2170 log.go:172] (0xc0005040b0) (0xc00062d540) Create stream\nI0323 00:36:21.762965 2170 log.go:172] (0xc0005040b0) (0xc00062d540) Stream added, broadcasting: 1\nI0323 00:36:21.766057 2170 log.go:172] (0xc0005040b0) Reply frame received for 1\nI0323 00:36:21.766124 2170 log.go:172] (0xc0005040b0) (0xc00076e000) Create stream\nI0323 00:36:21.766145 2170 log.go:172] (0xc0005040b0) (0xc00076e000) Stream added, broadcasting: 3\nI0323 00:36:21.767623 2170 log.go:172] (0xc0005040b0) Reply frame received for 3\nI0323 00:36:21.767660 2170 log.go:172] (0xc0005040b0) (0xc00062d5e0) Create stream\nI0323 00:36:21.767677 2170 log.go:172] (0xc0005040b0) (0xc00062d5e0) Stream added, broadcasting: 5\nI0323 00:36:21.768578 2170 log.go:172] (0xc0005040b0) Reply frame received for 5\nI0323 00:36:21.816694 2170 log.go:172] (0xc0005040b0) Data frame received for 5\nI0323 00:36:21.816730 2170 log.go:172] (0xc00062d5e0) (5) Data frame handling\nI0323 00:36:21.816748 2170 log.go:172] (0xc00062d5e0) (5) Data frame sent\nI0323 00:36:21.816761 2170 log.go:172] (0xc0005040b0) Data frame received for 5\nI0323 00:36:21.816768 2170 log.go:172] (0xc00062d5e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.242.148 80\nConnection to 
10.96.242.148 80 port [tcp/http] succeeded!\nI0323 00:36:21.816876 2170 log.go:172] (0xc00062d5e0) (5) Data frame sent\nI0323 00:36:21.817068 2170 log.go:172] (0xc0005040b0) Data frame received for 3\nI0323 00:36:21.817091 2170 log.go:172] (0xc00076e000) (3) Data frame handling\nI0323 00:36:21.817279 2170 log.go:172] (0xc0005040b0) Data frame received for 5\nI0323 00:36:21.817290 2170 log.go:172] (0xc00062d5e0) (5) Data frame handling\nI0323 00:36:21.818465 2170 log.go:172] (0xc0005040b0) Data frame received for 1\nI0323 00:36:21.818496 2170 log.go:172] (0xc00062d540) (1) Data frame handling\nI0323 00:36:21.818507 2170 log.go:172] (0xc00062d540) (1) Data frame sent\nI0323 00:36:21.818529 2170 log.go:172] (0xc0005040b0) (0xc00062d540) Stream removed, broadcasting: 1\nI0323 00:36:21.818576 2170 log.go:172] (0xc0005040b0) Go away received\nI0323 00:36:21.818861 2170 log.go:172] (0xc0005040b0) (0xc00062d540) Stream removed, broadcasting: 1\nI0323 00:36:21.818876 2170 log.go:172] (0xc0005040b0) (0xc00076e000) Stream removed, broadcasting: 3\nI0323 00:36:21.818882 2170 log.go:172] (0xc0005040b0) (0xc00062d5e0) Stream removed, broadcasting: 5\n" Mar 23 00:36:21.822: INFO: stdout: "" Mar 23 00:36:21.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2822 execpodpmln9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30105' Mar 23 00:36:22.022: INFO: stderr: "I0323 00:36:21.956951 2192 log.go:172] (0xc000cc4dc0) (0xc000a745a0) Create stream\nI0323 00:36:21.957016 2192 log.go:172] (0xc000cc4dc0) (0xc000a745a0) Stream added, broadcasting: 1\nI0323 00:36:21.964994 2192 log.go:172] (0xc000cc4dc0) Reply frame received for 1\nI0323 00:36:21.965034 2192 log.go:172] (0xc000cc4dc0) (0xc000b44320) Create stream\nI0323 00:36:21.965043 2192 log.go:172] (0xc000cc4dc0) (0xc000b44320) Stream added, broadcasting: 3\nI0323 00:36:21.966230 2192 log.go:172] (0xc000cc4dc0) Reply frame received for 3\nI0323 
00:36:21.966280 2192 log.go:172] (0xc000cc4dc0) (0xc000a74640) Create stream\nI0323 00:36:21.966295 2192 log.go:172] (0xc000cc4dc0) (0xc000a74640) Stream added, broadcasting: 5\nI0323 00:36:21.967168 2192 log.go:172] (0xc000cc4dc0) Reply frame received for 5\nI0323 00:36:22.016520 2192 log.go:172] (0xc000cc4dc0) Data frame received for 3\nI0323 00:36:22.016562 2192 log.go:172] (0xc000b44320) (3) Data frame handling\nI0323 00:36:22.016616 2192 log.go:172] (0xc000cc4dc0) Data frame received for 5\nI0323 00:36:22.016647 2192 log.go:172] (0xc000a74640) (5) Data frame handling\nI0323 00:36:22.016659 2192 log.go:172] (0xc000a74640) (5) Data frame sent\nI0323 00:36:22.016670 2192 log.go:172] (0xc000cc4dc0) Data frame received for 5\nI0323 00:36:22.016678 2192 log.go:172] (0xc000a74640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30105\nConnection to 172.17.0.13 30105 port [tcp/30105] succeeded!\nI0323 00:36:22.018342 2192 log.go:172] (0xc000cc4dc0) Data frame received for 1\nI0323 00:36:22.018374 2192 log.go:172] (0xc000a745a0) (1) Data frame handling\nI0323 00:36:22.018391 2192 log.go:172] (0xc000a745a0) (1) Data frame sent\nI0323 00:36:22.018420 2192 log.go:172] (0xc000cc4dc0) (0xc000a745a0) Stream removed, broadcasting: 1\nI0323 00:36:22.018441 2192 log.go:172] (0xc000cc4dc0) Go away received\nI0323 00:36:22.019206 2192 log.go:172] (0xc000cc4dc0) (0xc000a745a0) Stream removed, broadcasting: 1\nI0323 00:36:22.019222 2192 log.go:172] (0xc000cc4dc0) (0xc000b44320) Stream removed, broadcasting: 3\nI0323 00:36:22.019234 2192 log.go:172] (0xc000cc4dc0) (0xc000a74640) Stream removed, broadcasting: 5\n" Mar 23 00:36:22.022: INFO: stdout: "" Mar 23 00:36:22.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2822 execpodpmln9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30105' Mar 23 00:36:22.219: INFO: stderr: "I0323 00:36:22.154012 2212 log.go:172] (0xc00050a8f0) (0xc00039aaa0) 
Create stream\nI0323 00:36:22.154099 2212 log.go:172] (0xc00050a8f0) (0xc00039aaa0) Stream added, broadcasting: 1\nI0323 00:36:22.156930 2212 log.go:172] (0xc00050a8f0) Reply frame received for 1\nI0323 00:36:22.156975 2212 log.go:172] (0xc00050a8f0) (0xc0006f5220) Create stream\nI0323 00:36:22.156989 2212 log.go:172] (0xc00050a8f0) (0xc0006f5220) Stream added, broadcasting: 3\nI0323 00:36:22.158135 2212 log.go:172] (0xc00050a8f0) Reply frame received for 3\nI0323 00:36:22.158167 2212 log.go:172] (0xc00050a8f0) (0xc0003d2000) Create stream\nI0323 00:36:22.158177 2212 log.go:172] (0xc00050a8f0) (0xc0003d2000) Stream added, broadcasting: 5\nI0323 00:36:22.159090 2212 log.go:172] (0xc00050a8f0) Reply frame received for 5\nI0323 00:36:22.213786 2212 log.go:172] (0xc00050a8f0) Data frame received for 5\nI0323 00:36:22.213823 2212 log.go:172] (0xc0003d2000) (5) Data frame handling\nI0323 00:36:22.213848 2212 log.go:172] (0xc0003d2000) (5) Data frame sent\nI0323 00:36:22.213862 2212 log.go:172] (0xc00050a8f0) Data frame received for 5\nI0323 00:36:22.213872 2212 log.go:172] (0xc0003d2000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30105\nConnection to 172.17.0.12 30105 port [tcp/30105] succeeded!\nI0323 00:36:22.213897 2212 log.go:172] (0xc0003d2000) (5) Data frame sent\nI0323 00:36:22.214689 2212 log.go:172] (0xc00050a8f0) Data frame received for 5\nI0323 00:36:22.214718 2212 log.go:172] (0xc0003d2000) (5) Data frame handling\nI0323 00:36:22.214735 2212 log.go:172] (0xc00050a8f0) Data frame received for 3\nI0323 00:36:22.214744 2212 log.go:172] (0xc0006f5220) (3) Data frame handling\nI0323 00:36:22.215621 2212 log.go:172] (0xc00050a8f0) Data frame received for 1\nI0323 00:36:22.215638 2212 log.go:172] (0xc00039aaa0) (1) Data frame handling\nI0323 00:36:22.215646 2212 log.go:172] (0xc00039aaa0) (1) Data frame sent\nI0323 00:36:22.215657 2212 log.go:172] (0xc00050a8f0) (0xc00039aaa0) Stream removed, broadcasting: 1\nI0323 00:36:22.215691 2212 log.go:172] 
(0xc00050a8f0) Go away received\nI0323 00:36:22.216051 2212 log.go:172] (0xc00050a8f0) (0xc00039aaa0) Stream removed, broadcasting: 1\nI0323 00:36:22.216071 2212 log.go:172] (0xc00050a8f0) (0xc0006f5220) Stream removed, broadcasting: 3\nI0323 00:36:22.216082 2212 log.go:172] (0xc00050a8f0) (0xc0003d2000) Stream removed, broadcasting: 5\n" Mar 23 00:36:22.219: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:36:22.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2822" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.018 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":200,"skipped":3465,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:36:22.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 23 00:36:26.867: INFO: Successfully updated pod "annotationupdated6826ea4-1104-41b9-a517-e1f1856b15c4" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:36:28.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9499" for this suite. • [SLOW TEST:6.824 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3475,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:36:29.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 23 00:36:29.222: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7380 /api/v1/namespaces/watch-7380/configmaps/e2e-watch-test-label-changed ae5f3ba8-012a-4e62-99eb-33176ce9e227 2022698 0 2020-03-23 00:36:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:36:29.223: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7380 /api/v1/namespaces/watch-7380/configmaps/e2e-watch-test-label-changed ae5f3ba8-012a-4e62-99eb-33176ce9e227 2022699 0 2020-03-23 00:36:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:36:29.223: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7380 /api/v1/namespaces/watch-7380/configmaps/e2e-watch-test-label-changed ae5f3ba8-012a-4e62-99eb-33176ce9e227 2022700 0 2020-03-23 00:36:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 23 00:36:39.248: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7380 /api/v1/namespaces/watch-7380/configmaps/e2e-watch-test-label-changed ae5f3ba8-012a-4e62-99eb-33176ce9e227 2022753 0 2020-03-23 00:36:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:36:39.249: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7380 /api/v1/namespaces/watch-7380/configmaps/e2e-watch-test-label-changed ae5f3ba8-012a-4e62-99eb-33176ce9e227 2022754 0 2020-03-23 00:36:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 00:36:39.249: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7380 /api/v1/namespaces/watch-7380/configmaps/e2e-watch-test-label-changed ae5f3ba8-012a-4e62-99eb-33176ce9e227 2022755 0 2020-03-23 00:36:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:36:39.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7380" for this suite. 
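For reference, the label-selected ConfigMap driving the ADDED/MODIFIED/DELETED events above can be sketched as a manifest. The name, namespace, label key/value, and the `mutation` data key are taken from the log entries printed by the test; this is an illustrative reconstruction of the object's state, not the test's actual fixture code:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-7380
  labels:
    # The watch is established with a selector on this label. Changing the
    # label's value makes the object stop matching the selector, which the
    # watch surfaces as a DELETED notification even though the ConfigMap
    # still exists in the cluster.
    watch-this-configmap: label-changed-and-restored
data:
  # Incremented by the test on each modification (seen as "mutation: 1",
  # "mutation: 2", "mutation: 3" in the Got : MODIFIED events above).
  mutation: "2"
```

Restoring the original label value makes the object match the selector again, which the watch reports as an ADDED event — exactly the sequence the test asserts before the real deletion produces the final DELETED notification.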
• [SLOW TEST:10.204 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":202,"skipped":3477,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:36:39.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-4732 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 23 00:36:39.327: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 23 00:36:39.393: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 23 00:36:41.403: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 23 00:36:43.398: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:36:45.398: INFO: The status of Pod netserver-0 is Running 
(Ready = false) Mar 23 00:36:47.398: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:36:49.398: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:36:51.412: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 00:36:53.398: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 23 00:36:53.403: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 23 00:36:55.407: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 23 00:36:57.408: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 23 00:36:59.408: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 23 00:37:01.408: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 23 00:37:03.408: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 23 00:37:07.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.154:8080/dial?request=hostname&protocol=udp&host=10.244.2.17&port=8081&tries=1'] Namespace:pod-network-test-4732 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 00:37:07.436: INFO: >>> kubeConfig: /root/.kube/config I0323 00:37:07.463670 7 log.go:172] (0xc0024609a0) (0xc001eeadc0) Create stream I0323 00:37:07.463707 7 log.go:172] (0xc0024609a0) (0xc001eeadc0) Stream added, broadcasting: 1 I0323 00:37:07.465316 7 log.go:172] (0xc0024609a0) Reply frame received for 1 I0323 00:37:07.465348 7 log.go:172] (0xc0024609a0) (0xc001eeaf00) Create stream I0323 00:37:07.465358 7 log.go:172] (0xc0024609a0) (0xc001eeaf00) Stream added, broadcasting: 3 I0323 00:37:07.466140 7 log.go:172] (0xc0024609a0) Reply frame received for 3 I0323 00:37:07.466164 7 log.go:172] (0xc0024609a0) (0xc0029aa500) Create stream I0323 00:37:07.466173 7 log.go:172] (0xc0024609a0) (0xc0029aa500) Stream added, broadcasting: 5 I0323 00:37:07.466896 7 log.go:172] 
(0xc0024609a0) Reply frame received for 5 I0323 00:37:07.527697 7 log.go:172] (0xc0024609a0) Data frame received for 3 I0323 00:37:07.527723 7 log.go:172] (0xc001eeaf00) (3) Data frame handling I0323 00:37:07.527748 7 log.go:172] (0xc001eeaf00) (3) Data frame sent I0323 00:37:07.528500 7 log.go:172] (0xc0024609a0) Data frame received for 3 I0323 00:37:07.528527 7 log.go:172] (0xc001eeaf00) (3) Data frame handling I0323 00:37:07.528575 7 log.go:172] (0xc0024609a0) Data frame received for 5 I0323 00:37:07.528608 7 log.go:172] (0xc0029aa500) (5) Data frame handling I0323 00:37:07.530379 7 log.go:172] (0xc0024609a0) Data frame received for 1 I0323 00:37:07.530409 7 log.go:172] (0xc001eeadc0) (1) Data frame handling I0323 00:37:07.530429 7 log.go:172] (0xc001eeadc0) (1) Data frame sent I0323 00:37:07.530451 7 log.go:172] (0xc0024609a0) (0xc001eeadc0) Stream removed, broadcasting: 1 I0323 00:37:07.530475 7 log.go:172] (0xc0024609a0) Go away received I0323 00:37:07.530559 7 log.go:172] (0xc0024609a0) (0xc001eeadc0) Stream removed, broadcasting: 1 I0323 00:37:07.530586 7 log.go:172] (0xc0024609a0) (0xc001eeaf00) Stream removed, broadcasting: 3 I0323 00:37:07.530598 7 log.go:172] (0xc0024609a0) (0xc0029aa500) Stream removed, broadcasting: 5 Mar 23 00:37:07.530: INFO: Waiting for responses: map[] Mar 23 00:37:07.534: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.154:8080/dial?request=hostname&protocol=udp&host=10.244.1.153&port=8081&tries=1'] Namespace:pod-network-test-4732 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 00:37:07.534: INFO: >>> kubeConfig: /root/.kube/config I0323 00:37:07.568371 7 log.go:172] (0xc0026944d0) (0xc002623ea0) Create stream I0323 00:37:07.568396 7 log.go:172] (0xc0026944d0) (0xc002623ea0) Stream added, broadcasting: 1 I0323 00:37:07.570056 7 log.go:172] (0xc0026944d0) Reply frame received for 1 I0323 00:37:07.570077 7 log.go:172] 
(0xc0026944d0) (0xc001eeafa0) Create stream I0323 00:37:07.570084 7 log.go:172] (0xc0026944d0) (0xc001eeafa0) Stream added, broadcasting: 3 I0323 00:37:07.571029 7 log.go:172] (0xc0026944d0) Reply frame received for 3 I0323 00:37:07.571060 7 log.go:172] (0xc0026944d0) (0xc002623f40) Create stream I0323 00:37:07.571069 7 log.go:172] (0xc0026944d0) (0xc002623f40) Stream added, broadcasting: 5 I0323 00:37:07.572120 7 log.go:172] (0xc0026944d0) Reply frame received for 5 I0323 00:37:07.647811 7 log.go:172] (0xc0026944d0) Data frame received for 3 I0323 00:37:07.647848 7 log.go:172] (0xc001eeafa0) (3) Data frame handling I0323 00:37:07.647874 7 log.go:172] (0xc001eeafa0) (3) Data frame sent I0323 00:37:07.648389 7 log.go:172] (0xc0026944d0) Data frame received for 3 I0323 00:37:07.648406 7 log.go:172] (0xc001eeafa0) (3) Data frame handling I0323 00:37:07.648423 7 log.go:172] (0xc0026944d0) Data frame received for 5 I0323 00:37:07.648433 7 log.go:172] (0xc002623f40) (5) Data frame handling I0323 00:37:07.650068 7 log.go:172] (0xc0026944d0) Data frame received for 1 I0323 00:37:07.650083 7 log.go:172] (0xc002623ea0) (1) Data frame handling I0323 00:37:07.650092 7 log.go:172] (0xc002623ea0) (1) Data frame sent I0323 00:37:07.650270 7 log.go:172] (0xc0026944d0) (0xc002623ea0) Stream removed, broadcasting: 1 I0323 00:37:07.650359 7 log.go:172] (0xc0026944d0) Go away received I0323 00:37:07.650453 7 log.go:172] (0xc0026944d0) (0xc002623ea0) Stream removed, broadcasting: 1 I0323 00:37:07.650489 7 log.go:172] (0xc0026944d0) (0xc001eeafa0) Stream removed, broadcasting: 3 I0323 00:37:07.650506 7 log.go:172] (0xc0026944d0) (0xc002623f40) Stream removed, broadcasting: 5 Mar 23 00:37:07.650: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:07.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pod-network-test-4732" for this suite. • [SLOW TEST:28.405 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3477,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:07.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:38.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8927" for this suite. STEP: Destroying namespace "nsdeletetest-267" for this suite. Mar 23 00:37:38.928: INFO: Namespace nsdeletetest-267 was already deleted STEP: Destroying namespace "nsdeletetest-9661" for this suite. • [SLOW TEST:31.269 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":204,"skipped":3480,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:38.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0323 00:37:40.105729 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 23 00:37:40.105: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:40.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6794" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":205,"skipped":3482,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:40.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:40.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5161" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":206,"skipped":3482,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:40.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:37:40.365: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ef1df5a3-e8b1-4e7c-bee1-97b3c98fa649" in namespace "security-context-test-9839" to be "Succeeded or Failed" Mar 23 00:37:40.380: INFO: Pod "busybox-privileged-false-ef1df5a3-e8b1-4e7c-bee1-97b3c98fa649": Phase="Pending", Reason="", readiness=false. Elapsed: 14.817106ms Mar 23 00:37:42.490: INFO: Pod "busybox-privileged-false-ef1df5a3-e8b1-4e7c-bee1-97b3c98fa649": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125663844s Mar 23 00:37:44.495: INFO: Pod "busybox-privileged-false-ef1df5a3-e8b1-4e7c-bee1-97b3c98fa649": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.129789586s Mar 23 00:37:44.495: INFO: Pod "busybox-privileged-false-ef1df5a3-e8b1-4e7c-bee1-97b3c98fa649" satisfied condition "Succeeded or Failed" Mar 23 00:37:44.514: INFO: Got logs for pod "busybox-privileged-false-ef1df5a3-e8b1-4e7c-bee1-97b3c98fa649": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:44.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9839" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:44.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:37:44.568: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down 
rc "condition-test" to satisfy pod quota Mar 23 00:37:46.789: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:46.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5329" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":208,"skipped":3530,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:46.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:46.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "tables-8264" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":209,"skipped":3540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:46.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Mar 23 00:37:47.044: INFO: Waiting up to 5m0s for pod "client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7" in namespace "containers-2837" to be "Succeeded or Failed" Mar 23 00:37:47.048: INFO: Pod "client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.727932ms Mar 23 00:37:49.263: INFO: Pod "client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218299386s Mar 23 00:37:51.276: INFO: Pod "client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.23185648s STEP: Saw pod success Mar 23 00:37:51.276: INFO: Pod "client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7" satisfied condition "Succeeded or Failed" Mar 23 00:37:51.280: INFO: Trying to get logs from node latest-worker pod client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7 container test-container: STEP: delete the pod Mar 23 00:37:51.337: INFO: Waiting for pod client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7 to disappear Mar 23 00:37:51.342: INFO: Pod client-containers-62cd41d3-b567-4ada-902c-4e424e48acf7 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:51.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2837" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3570,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:51.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:37:51.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7283' Mar 23 00:37:55.934: INFO: stderr: "" Mar 23 00:37:55.934: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 23 00:37:55.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7283' Mar 23 00:37:56.210: INFO: stderr: "" Mar 23 00:37:56.210: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 23 00:37:57.214: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:37:57.215: INFO: Found 0 / 1 Mar 23 00:37:58.214: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:37:58.214: INFO: Found 0 / 1 Mar 23 00:37:59.213: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:37:59.213: INFO: Found 1 / 1 Mar 23 00:37:59.213: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 23 00:37:59.219: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:37:59.219: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 23 00:37:59.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-bx6pt --namespace=kubectl-7283' Mar 23 00:37:59.337: INFO: stderr: "" Mar 23 00:37:59.337: INFO: stdout: "Name: agnhost-master-bx6pt\nNamespace: kubectl-7283\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Mon, 23 Mar 2020 00:37:56 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.158\nIPs:\n IP: 10.244.1.158\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://961dae27871b75780089e050f97baa459bf42ff537e6c1b1f7841d6d11125cef\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 23 Mar 2020 00:37:58 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-w2t49 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-w2t49:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-w2t49\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-7283/agnhost-master-bx6pt to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container 
agnhost-master\n" Mar 23 00:37:59.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7283' Mar 23 00:37:59.446: INFO: stderr: "" Mar 23 00:37:59.446: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7283\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-bx6pt\n" Mar 23 00:37:59.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7283' Mar 23 00:37:59.547: INFO: stderr: "" Mar 23 00:37:59.547: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7283\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.238.234\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.158:6379\nSession Affinity: None\nEvents: \n" Mar 23 00:37:59.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Mar 23 00:37:59.664: INFO: stderr: "" Mar 23 00:37:59.664: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 23 Mar 2020 00:37:54 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 23 Mar 2020 00:37:41 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 23 Mar 2020 00:37:41 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 23 Mar 2020 00:37:41 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 23 Mar 2020 00:37:41 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d6h\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d6h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d6h\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 7d6h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 7d6h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 7d6h\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d6h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 7d6h\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d6h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Mar 23 00:37:59.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-7283' Mar 23 00:37:59.764: INFO: stderr: "" Mar 23 00:37:59.764: INFO: stdout: "Name: kubectl-7283\nLabels: e2e-framework=kubectl\n e2e-run=a489bb26-b82e-42d2-9897-d0a9e1f495cd\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:37:59.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7283" for this suite. 
• [SLOW TEST:8.424 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":211,"skipped":3576,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:37:59.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-d0464ed2-02c8-4f32-85fc-e3f70fd01873 STEP: Creating configMap with name cm-test-opt-upd-272c25ca-143c-4dca-acb3-b4ff51ed3a7f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d0464ed2-02c8-4f32-85fc-e3f70fd01873 STEP: Updating configmap cm-test-opt-upd-272c25ca-143c-4dca-acb3-b4ff51ed3a7f STEP: Creating configMap with name cm-test-opt-create-6a71c99e-5a1d-4f04-8846-990b458d74c2 STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:09.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4137" for this suite. • [SLOW TEST:10.212 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3593,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:09.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 23 00:38:10.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8540' Mar 23 00:38:10.310: INFO: stderr: "" Mar 23 00:38:10.310: INFO: stdout: 
"replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 23 00:38:11.314: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:38:11.314: INFO: Found 0 / 1 Mar 23 00:38:12.994: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:38:12.994: INFO: Found 0 / 1 Mar 23 00:38:13.314: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:38:13.314: INFO: Found 0 / 1 Mar 23 00:38:14.315: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:38:14.315: INFO: Found 1 / 1 Mar 23 00:38:14.315: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 23 00:38:14.318: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:38:14.318: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 23 00:38:14.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-tc6nr --namespace=kubectl-8540 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 23 00:38:14.422: INFO: stderr: "" Mar 23 00:38:14.422: INFO: stdout: "pod/agnhost-master-tc6nr patched\n" STEP: checking annotations Mar 23 00:38:14.433: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 00:38:14.433: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:14.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8540" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":213,"skipped":3595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:14.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 23 00:38:14.470: INFO: >>> kubeConfig: /root/.kube/config Mar 23 00:38:17.395: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:27.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6806" for this suite. 
• [SLOW TEST:13.545 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":214,"skipped":3674,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:27.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 23 00:38:28.082: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 23 00:38:28.091: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 23 00:38:28.091: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 23 00:38:28.098: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 23 00:38:28.098: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 23 00:38:28.146: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 23 00:38:28.146: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 23 00:38:35.507: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:35.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-708" for this suite. • [SLOW TEST:7.543 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":215,"skipped":3682,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:35.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 23 00:38:36.005: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 23 00:38:38.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520716, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520716, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520716, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520715, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 00:38:41.069: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:38:41.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:42.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2992" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.443 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":216,"skipped":3700,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:42.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 00:38:43.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353" in namespace "downward-api-4506" to be 
"Succeeded or Failed" Mar 23 00:38:43.092: INFO: Pod "downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013107ms Mar 23 00:38:45.096: INFO: Pod "downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007819948s Mar 23 00:38:47.100: INFO: Pod "downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012269701s STEP: Saw pod success Mar 23 00:38:47.100: INFO: Pod "downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353" satisfied condition "Succeeded or Failed" Mar 23 00:38:47.103: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353 container client-container: STEP: delete the pod Mar 23 00:38:47.121: INFO: Waiting for pod downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353 to disappear Mar 23 00:38:47.124: INFO: Pod downwardapi-volume-0ddd3403-c4ba-4931-886a-33bc25696353 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:47.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4506" for this suite. 
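The Downward API volume test above creates a pod whose container reads its own memory limit from a file mounted by the kubelet. A minimal sketch of the kind of manifest that test constructs, built as a plain Python dict (pod name, image, and the 64Mi limit are illustrative, not taken from the log):

```python
# Sketch of a downward API volume pod (illustrative names/values).
# The volume projects the container's memory limit into /etc/podinfo/mem_limit;
# divisor "1Mi" makes the file contain the limit expressed in mebibytes.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-demo"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # assumed image
            "command": ["sh", "-c", "cat /etc/podinfo/mem_limit"],
            "resources": {"limits": {"memory": "64Mi"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "mem_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                        "divisor": "1Mi",
                    },
                }],
            },
        }],
    },
}
```

The test then waits for the pod to reach "Succeeded or Failed" and checks the container's stdout, which is why the log fetches logs from `client-container` after the pod succeeds.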
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3720,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:47.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-00713fc5-c990-4a16-b20e-00ae206a0744 STEP: Creating a pod to test consume secrets Mar 23 00:38:47.200: INFO: Waiting up to 5m0s for pod "pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55" in namespace "secrets-646" to be "Succeeded or Failed" Mar 23 00:38:47.218: INFO: Pod "pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55": Phase="Pending", Reason="", readiness=false. Elapsed: 18.31154ms Mar 23 00:38:49.221: INFO: Pod "pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021699148s Mar 23 00:38:51.225: INFO: Pod "pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025511318s STEP: Saw pod success Mar 23 00:38:51.225: INFO: Pod "pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55" satisfied condition "Succeeded or Failed" Mar 23 00:38:51.227: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55 container secret-volume-test: STEP: delete the pod Mar 23 00:38:51.323: INFO: Waiting for pod pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55 to disappear Mar 23 00:38:51.358: INFO: Pod pod-secrets-88d8f804-ee27-4f97-9bfb-1429878e5f55 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:51.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-646" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3734,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:51.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 23 00:38:51.443: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 23 00:38:51.463: INFO: Waiting for terminating namespaces to be deleted... 
Mar 23 00:38:51.465: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 23 00:38:51.471: INFO: pod-no-resources from limitrange-708 started at 2020-03-23 00:38:28 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.471: INFO: Container pause ready: false, restart count 0 Mar 23 00:38:51.471: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.471: INFO: Container kindnet-cni ready: true, restart count 0 Mar 23 00:38:51.471: INFO: pfpod2 from limitrange-708 started at 2020-03-23 00:38:35 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.471: INFO: Container pause ready: false, restart count 0 Mar 23 00:38:51.471: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.471: INFO: Container kube-proxy ready: true, restart count 0 Mar 23 00:38:51.471: INFO: pfpod from limitrange-708 started at 2020-03-23 00:38:30 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.471: INFO: Container pause ready: false, restart count 0 Mar 23 00:38:51.471: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 23 00:38:51.476: INFO: pod-partial-resources from limitrange-708 started at 2020-03-23 00:38:28 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.476: INFO: Container pause ready: false, restart count 0 Mar 23 00:38:51.476: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.476: INFO: Container kube-proxy ready: true, restart count 0 Mar 23 00:38:51.476: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:38:51.476: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 23 00:38:51.575: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Mar 23 00:38:51.575: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Mar 23 00:38:51.575: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Mar 23 00:38:51.575: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Mar 23 00:38:51.575: INFO: Pod pfpod requesting resource cpu=10m on Node latest-worker Mar 23 00:38:51.575: INFO: Pod pfpod2 requesting resource cpu=600m on Node latest-worker Mar 23 00:38:51.575: INFO: Pod pod-no-resources requesting resource cpu=100m on Node latest-worker Mar 23 00:38:51.575: INFO: Pod pod-partial-resources requesting resource cpu=300m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 23 00:38:51.575: INFO: Creating a pod which consumes cpu=10633m on Node latest-worker Mar 23 00:38:51.582: INFO: Creating a pod which consumes cpu=10920m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
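The predicate test above is essentially arithmetic: it sums the CPU already requested on each node, creates a "filler" pod sized to consume the remaining allocatable CPU (10633m and 10920m in this run), and then verifies that one more pod requesting any CPU fails to schedule. A rough sketch of that bookkeeping — the 16000m allocatable figure is invented for illustration, since the log does not print node allocatable:

```python
def filler_cpu_millis(allocatable_m, requested_m):
    """CPU (millicores) a filler pod must request to saturate a node."""
    return max(allocatable_m - sum(requested_m), 0)

# Hypothetical node with the latest-worker pods seen in the log:
# kindnet 100m, kube-proxy 0m, pfpod 10m, pfpod2 600m, pod-no-resources 100m.
requested = [100, 0, 10, 600, 100]
fill = filler_cpu_millis(16_000, requested)  # size of the filler pod

# Once the filler is placed, the node is saturated: a further pod that
# requests any nonzero CPU trips the Insufficient cpu predicate.
remaining = filler_cpu_millis(16_000, requested + [fill])
```

This is why the final "additional-pod" event in the log reports `FailedScheduling` with `Insufficient cpu` on both workers (the third node being the tainted control plane).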
STEP: Considering event: Type = [Normal], Name = [filler-pod-13cbe46e-3df5-4168-828f-000aa3040c6c.15fec7eb2455fd51], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4356/filler-pod-13cbe46e-3df5-4168-828f-000aa3040c6c to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-13cbe46e-3df5-4168-828f-000aa3040c6c.15fec7eb96c2dee5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-13cbe46e-3df5-4168-828f-000aa3040c6c.15fec7ebc5606906], Reason = [Created], Message = [Created container filler-pod-13cbe46e-3df5-4168-828f-000aa3040c6c] STEP: Considering event: Type = [Normal], Name = [filler-pod-13cbe46e-3df5-4168-828f-000aa3040c6c.15fec7ebd4a3cf0d], Reason = [Started], Message = [Started container filler-pod-13cbe46e-3df5-4168-828f-000aa3040c6c] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d1c659c-983b-4dc9-9c4d-536f6d9b0ea8.15fec7eb2272bf90], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4356/filler-pod-5d1c659c-983b-4dc9-9c4d-536f6d9b0ea8 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d1c659c-983b-4dc9-9c4d-536f6d9b0ea8.15fec7eb6b64748a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d1c659c-983b-4dc9-9c4d-536f6d9b0ea8.15fec7ebb631f624], Reason = [Created], Message = [Created container filler-pod-5d1c659c-983b-4dc9-9c4d-536f6d9b0ea8] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d1c659c-983b-4dc9-9c4d-536f6d9b0ea8.15fec7ebc5facc22], Reason = [Started], Message = [Started container filler-pod-5d1c659c-983b-4dc9-9c4d-536f6d9b0ea8] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fec7ec13d48704], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 
Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:56.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4356" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.369 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":219,"skipped":3756,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:56.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should 
check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:38:56.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Mar 23 00:38:56.934: INFO: stderr: "" Mar 23 00:38:56.934: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:56.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1204" for this suite. 
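The `kubectl version` check above passes as long as both the client and server stanzas are printed. Incidentally, the two GitVersions in this run (client v1.19.0-alpha, server v1.17.0) are two minor versions apart; a small sketch of extracting and comparing the minor versions from strings shaped like those in the log:

```python
import re

def minor(git_version: str) -> int:
    """Extract the minor version from a GitVersion such as 'v1.17.0'."""
    m = re.match(r"v(\d+)\.(\d+)", git_version)
    if not m:
        raise ValueError(f"unrecognized version: {git_version}")
    return int(m.group(2))

skew = minor("v1.19.0-alpha.0.779+84dc7046797aad") - minor("v1.17.0")
print(skew)  # 2: the e2e client is two minor versions ahead of the apiserver
```

Running an e2e binary newer than the apiserver is common when exercising a fixed conformance suite against an older cluster, as this run does.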
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":220,"skipped":3769,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:56.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:38:57.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5702" for this suite. 
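The secret-patching test above patches a label onto a secret and then lists with a label selector to find it again. A toy sketch of both steps applied to plain dicts — for label maps, JSON-merge-patch semantics reduce to a recursive dict update; the secret name and label key here are invented:

```python
def merge_patch(obj: dict, patch: dict) -> dict:
    """Minimal JSON-merge-patch-style update, enough for label patches."""
    out = dict(obj)
    for k, v in patch.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge_patch(out[k], v)
        elif v is None:
            out.pop(k, None)  # null deletes the key, per merge-patch rules
        else:
            out[k] = v
    return out

def matches_selector(obj: dict, selector: dict) -> bool:
    """Equality-based label selector: every required pair must be present."""
    labels = obj.get("metadata", {}).get("labels", {})
    return all(labels.get(k) == v for k, v in selector.items())

secret = {"metadata": {"name": "demo-secret", "labels": {}}, "data": {}}
patched = merge_patch(secret, {"metadata": {"labels": {"testsecret": "true"}}})
```

The test's final step ("searching for label name and value in patch") is the `matches_selector` half: only the patched secret should come back from the labeled list.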
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":221,"skipped":3778,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:38:57.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 00:38:57.691: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 00:38:59.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:39:01.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520737, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 00:39:04.736: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the 
/apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:39:04.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2938" for this suite. STEP: Destroying namespace "webhook-2938-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.817 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":222,"skipped":3785,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:39:04.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 23 00:39:04.939: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 23 00:39:04.955: INFO: Waiting for terminating namespaces to be deleted... Mar 23 00:39:04.958: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 23 00:39:04.962: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:39:04.962: INFO: Container kindnet-cni ready: true, restart count 0 Mar 23 00:39:04.962: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:39:04.962: INFO: Container kube-proxy ready: true, restart count 0 Mar 23 00:39:04.962: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 23 00:39:04.966: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:39:04.966: INFO: Container kindnet-cni ready: true, restart count 0 Mar 23 00:39:04.966: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 00:39:04.966: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a94e47e6-0003-4e32-b3d7-6dd661b98d25 42 STEP: Trying to relaunch the pod, now with labels. 
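The NodeSelector test above applies a random label to the node that hosted the probe pod, then relaunches the pod with a matching `nodeSelector`. The predicate it exercises is just subset matching of the pod's selector against the node's labels; a sketch, using the label key and value ("42") that appear in the log:

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """True if every key/value the pod requires is present on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node_labels = {
    "kubernetes.io/hostname": "latest-worker2",
    # Random label the test applied (value "42", per the log above):
    "kubernetes.io/e2e-a94e47e6-0003-4e32-b3d7-6dd661b98d25": "42",
}
pod_selector = {"kubernetes.io/e2e-a94e47e6-0003-4e32-b3d7-6dd661b98d25": "42"}
```

A pod with an empty `nodeSelector` matches every node, which is why the first, label-free probe pod schedules anywhere; only the relaunched pod is pinned.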
STEP: removing the label kubernetes.io/e2e-a94e47e6-0003-4e32-b3d7-6dd661b98d25 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a94e47e6-0003-4e32-b3d7-6dd661b98d25 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:39:13.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2704" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.480 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":223,"skipped":3788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:39:13.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 23 00:39:13.415: INFO: Waiting up to 5m0s for pod "pod-be903daa-c741-440b-b776-e098ecf62b36" in namespace "emptydir-3285" to be "Succeeded or Failed" Mar 23 00:39:13.418: INFO: Pod "pod-be903daa-c741-440b-b776-e098ecf62b36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.218422ms Mar 23 00:39:15.422: INFO: Pod "pod-be903daa-c741-440b-b776-e098ecf62b36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007021787s Mar 23 00:39:17.426: INFO: Pod "pod-be903daa-c741-440b-b776-e098ecf62b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010844827s STEP: Saw pod success Mar 23 00:39:17.426: INFO: Pod "pod-be903daa-c741-440b-b776-e098ecf62b36" satisfied condition "Succeeded or Failed" Mar 23 00:39:17.428: INFO: Trying to get logs from node latest-worker2 pod pod-be903daa-c741-440b-b776-e098ecf62b36 container test-container: STEP: delete the pod Mar 23 00:39:17.456: INFO: Waiting for pod pod-be903daa-c741-440b-b776-e098ecf62b36 to disappear Mar 23 00:39:17.473: INFO: Pod pod-be903daa-c741-440b-b776-e098ecf62b36 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:39:17.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3285" for this suite. 
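The emptyDir test name encodes its parameters: a file created as root with mode 0666 on the default medium. The test container prints the file's mode string, and a passing run must show the `ls -l` rendering of 0666; a quick sketch of that rendering using only the standard library:

```python
import stat

# Permission string for a regular file created with mode 0666,
# formatted the way ls -l (and the test's expected output) shows it.
mode_string = stat.filemode(stat.S_IFREG | 0o666)
print(mode_string)  # -rw-rw-rw-
```

The `default` medium means the emptyDir is backed by node disk rather than tmpfs (`medium: Memory`); the mode and ownership checks are identical either way.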
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3828,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:39:17.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 00:39:18.115: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 00:39:20.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520758, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520758, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720520758, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520758, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 00:39:23.164: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 23 00:39:27.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-5354 to-be-attached-pod -i -c=container1' Mar 23 00:39:27.346: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:39:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5354" for this suite. STEP: Destroying namespace "webhook-5354-markers" for this suite. 
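The deny-attach test registers a validating webhook scoped to the `pods/attach` subresource, which is why the subsequent `kubectl attach` exits with rc 1. A sketch of the shape of the `ValidatingWebhookConfiguration` involved, as a dict — the webhook name, path, and CA placeholder are illustrative; the service name and namespace are the ones in the log:

```python
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-attaching-pod.example.com"},  # hypothetical
    "webhooks": [{
        "name": "deny-attaching-pod.example.com",
        "rules": [{
            "apiGroups": [""],
            "apiVersions": ["v1"],
            "operations": ["CONNECT"],     # attach/exec arrive as CONNECT
            "resources": ["pods/attach"],  # the subresource, not plain "pods"
        }],
        "clientConfig": {
            "service": {"namespace": "webhook-5354",
                        "name": "e2e-test-webhook",
                        "path": "/pods/attach"},  # illustrative path
            "caBundle": "<base64 CA>",            # placeholder
        },
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
        "failurePolicy": "Fail",
    }],
}
```

Targeting `pods/attach` with the CONNECT operation is the key detail: a rule on plain `pods` would not intercept `kubectl attach` at all.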
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.974 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":225,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:39:27.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:39:27.516: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 23 00:39:30.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-282 create -f -' Mar 23 
00:39:35.303: INFO: stderr: "" Mar 23 00:39:35.304: INFO: stdout: "e2e-test-crd-publish-openapi-7477-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 23 00:39:35.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-282 delete e2e-test-crd-publish-openapi-7477-crds test-cr' Mar 23 00:39:35.470: INFO: stderr: "" Mar 23 00:39:35.470: INFO: stdout: "e2e-test-crd-publish-openapi-7477-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 23 00:39:35.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-282 apply -f -' Mar 23 00:39:35.707: INFO: stderr: "" Mar 23 00:39:35.707: INFO: stdout: "e2e-test-crd-publish-openapi-7477-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 23 00:39:35.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-282 delete e2e-test-crd-publish-openapi-7477-crds test-cr' Mar 23 00:39:35.792: INFO: stderr: "" Mar 23 00:39:35.793: INFO: stdout: "e2e-test-crd-publish-openapi-7477-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 23 00:39:35.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7477-crds' Mar 23 00:39:36.043: INFO: stderr: "" Mar 23 00:39:36.043: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7477-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:39:38.942: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-282" for this suite. • [SLOW TEST:11.472 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":226,"skipped":3881,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:39:38.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-367a6bc6-ea52-4fbf-85ac-507062439286 STEP: Creating secret with name secret-projected-all-test-volume-d16c4206-7bed-493f-a04c-f89766d14266 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 23 00:39:39.199: INFO: Waiting up to 5m0s for pod "projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e" 
in namespace "projected-2958" to be "Succeeded or Failed" Mar 23 00:39:39.403: INFO: Pod "projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e": Phase="Pending", Reason="", readiness=false. Elapsed: 203.402781ms Mar 23 00:39:41.408: INFO: Pod "projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208957247s Mar 23 00:39:43.412: INFO: Pod "projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.213028681s STEP: Saw pod success Mar 23 00:39:43.412: INFO: Pod "projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e" satisfied condition "Succeeded or Failed" Mar 23 00:39:43.416: INFO: Trying to get logs from node latest-worker2 pod projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e container projected-all-volume-test: STEP: delete the pod Mar 23 00:39:43.458: INFO: Waiting for pod projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e to disappear Mar 23 00:39:43.466: INFO: Pod projected-volume-bd28e832-13a7-41bb-959e-9feb3164c37e no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:39:43.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2958" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3895,"failed":0}
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:39:43.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 23 00:39:51.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 23 00:39:51.630: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 23 00:39:53.630: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 23 00:39:53.634: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 23 00:39:55.630: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 23 00:39:55.635: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:39:55.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2755" for this suite.
• [SLOW TEST:12.175 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3895,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:39:55.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-64180f4a-1294-47a8-b4a6-e365650a6d00
STEP: Creating a pod to test consume configMaps
Mar 23 00:39:55.782: INFO: Waiting up to 5m0s for pod "pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc" in namespace "configmap-5003" to be "Succeeded or Failed"
Mar 23 00:39:55.785: INFO: Pod "pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.511597ms
Mar 23 00:39:57.789: INFO: Pod "pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007641036s
Mar 23 00:39:59.794: INFO: Pod "pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012130045s
STEP: Saw pod success
Mar 23 00:39:59.794: INFO: Pod "pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc" satisfied condition "Succeeded or Failed"
Mar 23 00:39:59.797: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc container configmap-volume-test:
STEP: delete the pod
Mar 23 00:39:59.816: INFO: Waiting for pod pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc to disappear
Mar 23 00:39:59.821: INFO: Pod pod-configmaps-462ef1d6-dada-4dd5-b187-4d7240e6ecfc no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:39:59.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5003" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3932,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:39:59.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 23 00:40:04.448: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4247 pod-service-account-4be89669-1d52-4104-b960-7a2588c9b884 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 23 00:40:04.673: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4247 pod-service-account-4be89669-1d52-4104-b960-7a2588c9b884 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 23 00:40:04.880: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4247 pod-service-account-4be89669-1d52-4104-b960-7a2588c9b884 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:40:05.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4247" for this suite.
• [SLOW TEST:5.282 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":230,"skipped":3936,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:40:05.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:40:05.206: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 23 00:40:05.231: INFO: Number of nodes with available pods: 0
Mar 23 00:40:05.231: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 23 00:40:05.309: INFO: Number of nodes with available pods: 0
Mar 23 00:40:05.309: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:06.313: INFO: Number of nodes with available pods: 0
Mar 23 00:40:06.313: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:07.313: INFO: Number of nodes with available pods: 0
Mar 23 00:40:07.313: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:08.313: INFO: Number of nodes with available pods: 1
Mar 23 00:40:08.313: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 23 00:40:08.340: INFO: Number of nodes with available pods: 1
Mar 23 00:40:08.340: INFO: Number of running nodes: 0, number of available pods: 1
Mar 23 00:40:09.344: INFO: Number of nodes with available pods: 0
Mar 23 00:40:09.344: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 23 00:40:09.396: INFO: Number of nodes with available pods: 0
Mar 23 00:40:09.396: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:10.426: INFO: Number of nodes with available pods: 0
Mar 23 00:40:10.426: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:11.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:11.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:12.403: INFO: Number of nodes with available pods: 0
Mar 23 00:40:12.403: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:13.400: INFO: Number of nodes with available pods: 0
Mar 23 00:40:13.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:14.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:14.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:15.400: INFO: Number of nodes with available pods: 0
Mar 23 00:40:15.400: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:16.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:16.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:17.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:17.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:18.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:18.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:19.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:19.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:20.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:20.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:21.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:21.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:22.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:22.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:23.401: INFO: Number of nodes with available pods: 0
Mar 23 00:40:23.401: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:24.400: INFO: Number of nodes with available pods: 0
Mar 23 00:40:24.400: INFO: Node latest-worker is running more than one daemon pod
Mar 23 00:40:25.401: INFO: Number of nodes with available pods: 1
Mar 23 00:40:25.401: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9794, will wait for the garbage collector to delete the pods
Mar 23 00:40:25.465: INFO: Deleting DaemonSet.extensions daemon-set took: 5.674873ms
Mar 23 00:40:25.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.298683ms
Mar 23 00:40:29.269: INFO: Number of nodes with available pods: 0
Mar 23 00:40:29.269: INFO: Number of running nodes: 0, number of available pods: 0
Mar 23 00:40:29.272: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9794/daemonsets","resourceVersion":"2024520"},"items":null}
Mar 23 00:40:29.275: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9794/pods","resourceVersion":"2024520"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:40:29.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9794" for this suite.
• [SLOW TEST:24.206 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":231,"skipped":3944,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:40:29.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-daa14250-2321-46b3-9894-04adad95de46
STEP: Creating a pod to test consume secrets
Mar 23 00:40:29.397: INFO: Waiting up to 5m0s for pod "pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276" in namespace "secrets-860" to be "Succeeded or Failed"
Mar 23 00:40:29.431: INFO: Pod "pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276": Phase="Pending", Reason="", readiness=false. Elapsed: 34.165617ms
Mar 23 00:40:31.434: INFO: Pod "pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03728273s
Mar 23 00:40:33.439: INFO: Pod "pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041513546s
STEP: Saw pod success
Mar 23 00:40:33.439: INFO: Pod "pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276" satisfied condition "Succeeded or Failed"
Mar 23 00:40:33.442: INFO: Trying to get logs from node latest-worker pod pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276 container secret-volume-test:
STEP: delete the pod
Mar 23 00:40:33.463: INFO: Waiting for pod pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276 to disappear
Mar 23 00:40:33.467: INFO: Pod pod-secrets-143a96e4-d79f-4ff0-b381-37d81062c276 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:40:33.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-860" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3961,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:40:33.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5673
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Mar 23 00:40:33.569: INFO: Found 0 stateful pods, waiting for 3
Mar 23 00:40:43.583: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 23 00:40:43.583: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 23 00:40:43.583: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 23 00:40:43.610: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 23 00:40:53.714: INFO: Updating stateful set ss2
Mar 23 00:40:53.727: INFO: Waiting for Pod statefulset-5673/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar 23 00:41:03.847: INFO: Found 2 stateful pods, waiting for 3
Mar 23 00:41:13.853: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 23 00:41:13.853: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 23 00:41:13.853: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 23 00:41:13.875: INFO: Updating stateful set ss2
Mar 23 00:41:13.901: INFO: Waiting for Pod statefulset-5673/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 23 00:41:23.997: INFO: Updating stateful set ss2
Mar 23 00:41:24.024: INFO: Waiting for StatefulSet statefulset-5673/ss2 to complete update
Mar 23 00:41:24.024: INFO: Waiting for Pod statefulset-5673/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 23 00:41:34.030: INFO: Waiting for StatefulSet statefulset-5673/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 23 00:41:44.032: INFO: Deleting all statefulset in ns statefulset-5673
Mar 23 00:41:44.035: INFO: Scaling statefulset ss2 to 0
Mar 23 00:42:04.055: INFO: Waiting for statefulset status.replicas updated to 0
Mar 23 00:42:04.058: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:42:04.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5673" for this suite.
• [SLOW TEST:90.604 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":233,"skipped":3979,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:42:04.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-d8606839-beb1-48f7-bb1d-eb197770c25f
STEP: Creating a pod to test consume configMaps
Mar 23 00:42:04.137: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956" in namespace "projected-9620" to be "Succeeded or Failed"
Mar 23 00:42:04.141: INFO: Pod "pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956": Phase="Pending", Reason="", readiness=false. Elapsed: 3.90128ms
Mar 23 00:42:06.145: INFO: Pod "pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008129947s
Mar 23 00:42:08.150: INFO: Pod "pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013094164s
STEP: Saw pod success
Mar 23 00:42:08.150: INFO: Pod "pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956" satisfied condition "Succeeded or Failed"
Mar 23 00:42:08.153: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956 container projected-configmap-volume-test:
STEP: delete the pod
Mar 23 00:42:08.200: INFO: Waiting for pod pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956 to disappear
Mar 23 00:42:08.212: INFO: Pod pod-projected-configmaps-c2afdf96-780e-40ca-bc68-57f26249f956 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:42:08.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9620" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3981,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:42:08.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 23 00:42:08.260: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 23 00:42:08.291: INFO: Waiting for terminating namespaces to be deleted...
Mar 23 00:42:08.293: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Mar 23 00:42:08.300: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 23 00:42:08.300: INFO: Container kindnet-cni ready: true, restart count 0
Mar 23 00:42:08.300: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 23 00:42:08.300: INFO: Container kube-proxy ready: true, restart count 0
Mar 23 00:42:08.300: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Mar 23 00:42:08.320: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 23 00:42:08.320: INFO: Container kindnet-cni ready: true, restart count 0
Mar 23 00:42:08.320: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 23 00:42:08.320: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6f0fd541-2b53-45e9-a2dd-a0c77fcd7fea STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-6f0fd541-2b53-45e9-a2dd-a0c77fcd7fea off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-6f0fd541-2b53-45e9-a2dd-a0c77fcd7fea [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:42:24.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6236" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.319 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":235,"skipped":3991,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:42:24.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 23 00:42:24.592: INFO: >>> kubeConfig: /root/.kube/config Mar 23 00:42:27.480: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:42:37.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4428" for this suite. 
• [SLOW TEST:13.220 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":236,"skipped":3995,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:42:37.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:42:41.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7165" for this suite. 
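Editor's note: the Docker Containers test above ("should use the image defaults if command and args are blank") exercises the documented rule for how a container spec's `command`/`args` combine with the image's ENTRYPOINT/CMD. A minimal sketch of that rule (a simplified model, not kubelet source; the entrypoint/arg values are illustrative):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Model of how Kubernetes combines spec.containers[].command/args
    with the image's ENTRYPOINT/CMD (simplified sketch, not kubelet code).

    - Neither command nor args set: image ENTRYPOINT + image CMD (the case
      the conformance test verifies).
    - Only command set: it replaces ENTRYPOINT; image CMD is ignored.
    - Only args set: image ENTRYPOINT runs with the given args.
    - Both set: command + args; image defaults are ignored entirely.
    """
    if command is None and args is None:
        return (entrypoint or []) + (cmd or [])
    if args is None:
        return list(command)
    if command is None:
        return (entrypoint or []) + list(args)
    return list(command) + list(args)

# Blank command and args fall back to the image defaults.
invocation = effective_invocation(["/entrypoint"], ["default-arg"])
```

This is why the test only needs to create a pod with empty `command` and `args` and observe that the image's own process starts.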
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4008,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:42:41.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 00:42:42.262: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 00:42:44.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520962, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520962, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520962, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720520962, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 00:42:47.303: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:42:59.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8935" for this suite. STEP: Destroying namespace "webhook-8935-markers" for this suite. 
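Editor's note: the four "Registering slow webhook" STEPs above correspond directly to the v1 admission webhook timeout rules: a request fails only when the webhook exceeds `timeoutSeconds` and `failurePolicy` is `Fail`; an empty timeout defaults to 10s. A toy decision model of those rules (a sketch under those assumptions, not apiserver code):

```python
def admission_outcome(failure_policy, timeout_s, webhook_latency_s):
    """Toy model of the timeout behavior the webhook test verifies.
    If the webhook answers within the timeout, the request is admitted
    (the webhook in the test always allows); on timeout, failurePolicy
    decides: Fail rejects, Ignore admits. timeoutSeconds defaults to 10
    in admissionregistration.k8s.io/v1 when left empty."""
    if timeout_s is None:
        timeout_s = 10  # v1 default, as the log notes
    if webhook_latency_s <= timeout_s:
        return "admitted"
    return "rejected" if failure_policy == "Fail" else "admitted"
```

The four assertions below mirror the four scenarios in the log: 1s timeout with Fail, 1s timeout with Ignore, 30s timeout, and empty (defaulted) timeout, each against a 5s-slow webhook.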
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.690 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":238,"skipped":4053,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:42:59.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:42:59.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5604" for this suite. STEP: Destroying namespace "nspatchtest-2db9fa74-439b-4c74-bcdc-97d38934145d-3215" for this suite. 
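Editor's note: the Namespaces test above patches a label onto a namespace. The merge semantics behind that kind of label patch are JSON merge patch (RFC 7386): nested objects merge recursively, and a null value deletes a key. A minimal sketch (the label key/value here are illustrative, not taken from the test's payload):

```python
def json_merge_patch(target, patch):
    """Minimal RFC 7386 JSON merge patch, the semantics behind
    `kubectl patch --type=merge`. Nested dicts merge recursively;
    a None (JSON null) value deletes the key; any non-dict patch
    value simply replaces the target."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

ns = {"metadata": {"name": "nspatchtest", "labels": {"existing": "true"}}}
patched = json_merge_patch(ns, {"metadata": {"labels": {"testLabel": "testValue"}}})
```

Note that the existing label survives the patch; only the keys named in the patch document change, which is what "get the Namespace and ensuring it has the label" checks.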
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":239,"skipped":4083,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:42:59.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 23 00:43:06.293: INFO: 0 pods remaining Mar 23 00:43:06.293: INFO: 0 pods has nil DeletionTimestamp Mar 23 00:43:06.293: INFO: STEP: Gathering metrics W0323 00:43:07.430460 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 23 00:43:07.430: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:43:07.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7936" for this suite. 
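Editor's note: this garbage-collector test and the one that follows ("orphan pods created by rc if delete options say so") exercise the three `deleteOptions.propagationPolicy` values. A toy model of the semantics they verify (a sketch, not the real garbage-collector controller):

```python
from enum import Enum

class Propagation(Enum):
    ORPHAN = "Orphan"          # delete the owner, strip ownerReferences, leave pods
    BACKGROUND = "Background"  # owner deleted first; GC removes pods afterwards
    FOREGROUND = "Foreground"  # owner kept (with a finalizer) until all pods are gone

def delete_rc(policy):
    """Returns (rc_visible_while_pods_remain, pods_deleted_by_gc).
    Foreground matches 'keep the rc around until all its pods are deleted';
    Orphan matches the second test's 30-second check that pods survive."""
    if policy is Propagation.ORPHAN:
        return (False, False)
    if policy is Propagation.FOREGROUND:
        return (True, True)
    return (False, True)
```

The first test asserts the Foreground row of this table; the second waits 30 seconds to confirm the Orphan row, i.e. that the GC does not mistakenly delete the orphaned pods.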
• [SLOW TEST:8.151 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":240,"skipped":4108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:43:07.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0323 00:43:50.054926 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 23 00:43:50.054: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:43:50.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3188" for this suite. • [SLOW TEST:42.227 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":241,"skipped":4143,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:43:50.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:44:08.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5431" for this suite. • [SLOW TEST:18.130 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":242,"skipped":4150,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:44:08.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-af36d965-3a3d-46c6-a53f-f81f032f40a9 STEP: Creating a pod to test consume configMaps Mar 23 00:44:08.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317" in namespace "configmap-1757" to be "Succeeded or Failed" Mar 23 00:44:08.312: INFO: Pod "pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317": Phase="Pending", Reason="", readiness=false. Elapsed: 16.308383ms Mar 23 00:44:10.320: INFO: Pod "pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024329569s Mar 23 00:44:12.324: INFO: Pod "pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028219259s STEP: Saw pod success Mar 23 00:44:12.324: INFO: Pod "pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317" satisfied condition "Succeeded or Failed" Mar 23 00:44:12.327: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317 container configmap-volume-test: STEP: delete the pod Mar 23 00:44:12.378: INFO: Waiting for pod pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317 to disappear Mar 23 00:44:12.398: INFO: Pod pod-configmaps-8ac4d891-5c00-45cc-b38a-3ef123193317 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:44:12.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1757" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:44:12.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9909 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-9909 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9909 Mar 23 00:44:12.467: INFO: Found 0 stateful pods, waiting for 1 Mar 23 00:44:22.472: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 23 00:44:22.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9909 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:44:22.719: INFO: stderr: "I0323 00:44:22.600265 2662 log.go:172] (0xc000abd3f0) (0xc000a406e0) Create stream\nI0323 00:44:22.600801 2662 log.go:172] (0xc000abd3f0) (0xc000a406e0) Stream added, broadcasting: 1\nI0323 00:44:22.605219 2662 log.go:172] (0xc000abd3f0) Reply frame received for 1\nI0323 00:44:22.605264 2662 log.go:172] (0xc000abd3f0) (0xc0006b3720) Create stream\nI0323 00:44:22.605277 2662 log.go:172] (0xc000abd3f0) (0xc0006b3720) Stream added, broadcasting: 3\nI0323 00:44:22.606201 2662 log.go:172] (0xc000abd3f0) Reply frame received for 3\nI0323 00:44:22.606238 2662 log.go:172] (0xc000abd3f0) (0xc000534b40) Create stream\nI0323 00:44:22.606250 2662 log.go:172] (0xc000abd3f0) (0xc000534b40) Stream added, broadcasting: 5\nI0323 00:44:22.607033 2662 log.go:172] (0xc000abd3f0) Reply frame received for 5\nI0323 00:44:22.680095 2662 log.go:172] (0xc000abd3f0) Data frame received for 5\nI0323 00:44:22.680125 2662 log.go:172] (0xc000534b40) (5) Data frame handling\nI0323 
00:44:22.680145 2662 log.go:172] (0xc000534b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:44:22.711540 2662 log.go:172] (0xc000abd3f0) Data frame received for 3\nI0323 00:44:22.711580 2662 log.go:172] (0xc0006b3720) (3) Data frame handling\nI0323 00:44:22.711611 2662 log.go:172] (0xc0006b3720) (3) Data frame sent\nI0323 00:44:22.711883 2662 log.go:172] (0xc000abd3f0) Data frame received for 5\nI0323 00:44:22.711902 2662 log.go:172] (0xc000534b40) (5) Data frame handling\nI0323 00:44:22.712492 2662 log.go:172] (0xc000abd3f0) Data frame received for 3\nI0323 00:44:22.712514 2662 log.go:172] (0xc0006b3720) (3) Data frame handling\nI0323 00:44:22.714414 2662 log.go:172] (0xc000abd3f0) Data frame received for 1\nI0323 00:44:22.714436 2662 log.go:172] (0xc000a406e0) (1) Data frame handling\nI0323 00:44:22.714449 2662 log.go:172] (0xc000a406e0) (1) Data frame sent\nI0323 00:44:22.714460 2662 log.go:172] (0xc000abd3f0) (0xc000a406e0) Stream removed, broadcasting: 1\nI0323 00:44:22.714534 2662 log.go:172] (0xc000abd3f0) Go away received\nI0323 00:44:22.714766 2662 log.go:172] (0xc000abd3f0) (0xc000a406e0) Stream removed, broadcasting: 1\nI0323 00:44:22.714787 2662 log.go:172] (0xc000abd3f0) (0xc0006b3720) Stream removed, broadcasting: 3\nI0323 00:44:22.714794 2662 log.go:172] (0xc000abd3f0) (0xc000534b40) Stream removed, broadcasting: 5\n" Mar 23 00:44:22.719: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:44:22.719: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:44:22.742: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 23 00:44:32.768: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 23 00:44:32.768: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:44:32.810: INFO: POD NODE PHASE 
GRACE CONDITIONS Mar 23 00:44:32.810: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC }] Mar 23 00:44:32.810: INFO: Mar 23 00:44:32.810: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 23 00:44:33.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.966540918s Mar 23 00:44:34.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961875992s Mar 23 00:44:35.933: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.861472787s Mar 23 00:44:36.938: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.843793165s Mar 23 00:44:37.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.839141168s Mar 23 00:44:38.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.834729577s Mar 23 00:44:39.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.829454867s Mar 23 00:44:40.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.824262717s Mar 23 00:44:41.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 819.180198ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9909 Mar 23 00:44:42.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9909 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:44:43.181: INFO: stderr: "I0323 00:44:43.115411 2683 log.go:172] (0xc00094c000) (0xc000829400) Create stream\nI0323 
00:44:43.115487 2683 log.go:172] (0xc00094c000) (0xc000829400) Stream added, broadcasting: 1\nI0323 00:44:43.117599 2683 log.go:172] (0xc00094c000) Reply frame received for 1\nI0323 00:44:43.117639 2683 log.go:172] (0xc00094c000) (0xc000af6000) Create stream\nI0323 00:44:43.117651 2683 log.go:172] (0xc00094c000) (0xc000af6000) Stream added, broadcasting: 3\nI0323 00:44:43.118602 2683 log.go:172] (0xc00094c000) Reply frame received for 3\nI0323 00:44:43.118639 2683 log.go:172] (0xc00094c000) (0xc000af60a0) Create stream\nI0323 00:44:43.118651 2683 log.go:172] (0xc00094c000) (0xc000af60a0) Stream added, broadcasting: 5\nI0323 00:44:43.119638 2683 log.go:172] (0xc00094c000) Reply frame received for 5\nI0323 00:44:43.175771 2683 log.go:172] (0xc00094c000) Data frame received for 3\nI0323 00:44:43.175803 2683 log.go:172] (0xc000af6000) (3) Data frame handling\nI0323 00:44:43.175825 2683 log.go:172] (0xc000af6000) (3) Data frame sent\nI0323 00:44:43.175837 2683 log.go:172] (0xc00094c000) Data frame received for 3\nI0323 00:44:43.175847 2683 log.go:172] (0xc000af6000) (3) Data frame handling\nI0323 00:44:43.175903 2683 log.go:172] (0xc00094c000) Data frame received for 5\nI0323 00:44:43.175931 2683 log.go:172] (0xc000af60a0) (5) Data frame handling\nI0323 00:44:43.175950 2683 log.go:172] (0xc000af60a0) (5) Data frame sent\nI0323 00:44:43.175960 2683 log.go:172] (0xc00094c000) Data frame received for 5\nI0323 00:44:43.175968 2683 log.go:172] (0xc000af60a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 00:44:43.177474 2683 log.go:172] (0xc00094c000) Data frame received for 1\nI0323 00:44:43.177498 2683 log.go:172] (0xc000829400) (1) Data frame handling\nI0323 00:44:43.177519 2683 log.go:172] (0xc000829400) (1) Data frame sent\nI0323 00:44:43.177551 2683 log.go:172] (0xc00094c000) (0xc000829400) Stream removed, broadcasting: 1\nI0323 00:44:43.177581 2683 log.go:172] (0xc00094c000) Go away received\nI0323 00:44:43.177895 2683 log.go:172] 
(0xc00094c000) (0xc000829400) Stream removed, broadcasting: 1\nI0323 00:44:43.177911 2683 log.go:172] (0xc00094c000) (0xc000af6000) Stream removed, broadcasting: 3\nI0323 00:44:43.177919 2683 log.go:172] (0xc00094c000) (0xc000af60a0) Stream removed, broadcasting: 5\n" Mar 23 00:44:43.181: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:44:43.181: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:44:43.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9909 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:44:43.404: INFO: stderr: "I0323 00:44:43.324066 2703 log.go:172] (0xc000bba6e0) (0xc0008e8000) Create stream\nI0323 00:44:43.324134 2703 log.go:172] (0xc000bba6e0) (0xc0008e8000) Stream added, broadcasting: 1\nI0323 00:44:43.327106 2703 log.go:172] (0xc000bba6e0) Reply frame received for 1\nI0323 00:44:43.327160 2703 log.go:172] (0xc000bba6e0) (0xc0007fd2c0) Create stream\nI0323 00:44:43.327177 2703 log.go:172] (0xc000bba6e0) (0xc0007fd2c0) Stream added, broadcasting: 3\nI0323 00:44:43.328191 2703 log.go:172] (0xc000bba6e0) Reply frame received for 3\nI0323 00:44:43.328226 2703 log.go:172] (0xc000bba6e0) (0xc0007fd4a0) Create stream\nI0323 00:44:43.328244 2703 log.go:172] (0xc000bba6e0) (0xc0007fd4a0) Stream added, broadcasting: 5\nI0323 00:44:43.329286 2703 log.go:172] (0xc000bba6e0) Reply frame received for 5\nI0323 00:44:43.395270 2703 log.go:172] (0xc000bba6e0) Data frame received for 5\nI0323 00:44:43.395292 2703 log.go:172] (0xc0007fd4a0) (5) Data frame handling\nI0323 00:44:43.395304 2703 log.go:172] (0xc0007fd4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0323 00:44:43.395568 2703 
log.go:172] (0xc000bba6e0) Data frame received for 3\nI0323 00:44:43.395588 2703 log.go:172] (0xc0007fd2c0) (3) Data frame handling\nI0323 00:44:43.395600 2703 log.go:172] (0xc0007fd2c0) (3) Data frame sent\nI0323 00:44:43.395639 2703 log.go:172] (0xc000bba6e0) Data frame received for 5\nI0323 00:44:43.395691 2703 log.go:172] (0xc0007fd4a0) (5) Data frame handling\nI0323 00:44:43.395863 2703 log.go:172] (0xc000bba6e0) Data frame received for 3\nI0323 00:44:43.395884 2703 log.go:172] (0xc0007fd2c0) (3) Data frame handling\nI0323 00:44:43.397800 2703 log.go:172] (0xc000bba6e0) Data frame received for 1\nI0323 00:44:43.397852 2703 log.go:172] (0xc0008e8000) (1) Data frame handling\nI0323 00:44:43.397900 2703 log.go:172] (0xc0008e8000) (1) Data frame sent\nI0323 00:44:43.397953 2703 log.go:172] (0xc000bba6e0) (0xc0008e8000) Stream removed, broadcasting: 1\nI0323 00:44:43.397997 2703 log.go:172] (0xc000bba6e0) Go away received\nI0323 00:44:43.398416 2703 log.go:172] (0xc000bba6e0) (0xc0008e8000) Stream removed, broadcasting: 1\nI0323 00:44:43.398442 2703 log.go:172] (0xc000bba6e0) (0xc0007fd2c0) Stream removed, broadcasting: 3\nI0323 00:44:43.398455 2703 log.go:172] (0xc000bba6e0) (0xc0007fd4a0) Stream removed, broadcasting: 5\n" Mar 23 00:44:43.404: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:44:43.404: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:44:43.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9909 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 00:44:43.612: INFO: stderr: "I0323 00:44:43.535580 2724 log.go:172] (0xc000a59080) (0xc000a14780) Create stream\nI0323 00:44:43.535640 2724 log.go:172] (0xc000a59080) (0xc000a14780) Stream added, broadcasting: 1\nI0323 00:44:43.542054 2724 
log.go:172] (0xc000a59080) Reply frame received for 1\nI0323 00:44:43.542128 2724 log.go:172] (0xc000a59080) (0xc000821680) Create stream\nI0323 00:44:43.542149 2724 log.go:172] (0xc000a59080) (0xc000821680) Stream added, broadcasting: 3\nI0323 00:44:43.545939 2724 log.go:172] (0xc000a59080) Reply frame received for 3\nI0323 00:44:43.545963 2724 log.go:172] (0xc000a59080) (0xc000604aa0) Create stream\nI0323 00:44:43.545971 2724 log.go:172] (0xc000a59080) (0xc000604aa0) Stream added, broadcasting: 5\nI0323 00:44:43.547534 2724 log.go:172] (0xc000a59080) Reply frame received for 5\nI0323 00:44:43.604800 2724 log.go:172] (0xc000a59080) Data frame received for 3\nI0323 00:44:43.604843 2724 log.go:172] (0xc000821680) (3) Data frame handling\nI0323 00:44:43.604882 2724 log.go:172] (0xc000821680) (3) Data frame sent\nI0323 00:44:43.604906 2724 log.go:172] (0xc000a59080) Data frame received for 3\nI0323 00:44:43.604922 2724 log.go:172] (0xc000821680) (3) Data frame handling\nI0323 00:44:43.605309 2724 log.go:172] (0xc000a59080) Data frame received for 5\nI0323 00:44:43.605348 2724 log.go:172] (0xc000604aa0) (5) Data frame handling\nI0323 00:44:43.605381 2724 log.go:172] (0xc000604aa0) (5) Data frame sent\nI0323 00:44:43.605406 2724 log.go:172] (0xc000a59080) Data frame received for 5\nI0323 00:44:43.605429 2724 log.go:172] (0xc000604aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0323 00:44:43.607108 2724 log.go:172] (0xc000a59080) Data frame received for 1\nI0323 00:44:43.607130 2724 log.go:172] (0xc000a14780) (1) Data frame handling\nI0323 00:44:43.607144 2724 log.go:172] (0xc000a14780) (1) Data frame sent\nI0323 00:44:43.607163 2724 log.go:172] (0xc000a59080) (0xc000a14780) Stream removed, broadcasting: 1\nI0323 00:44:43.607186 2724 log.go:172] (0xc000a59080) Go away received\nI0323 00:44:43.607591 2724 log.go:172] (0xc000a59080) (0xc000a14780) Stream removed, 
broadcasting: 1\nI0323 00:44:43.607615 2724 log.go:172] (0xc000a59080) (0xc000821680) Stream removed, broadcasting: 3\nI0323 00:44:43.607626 2724 log.go:172] (0xc000a59080) (0xc000604aa0) Stream removed, broadcasting: 5\n" Mar 23 00:44:43.612: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 00:44:43.612: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 00:44:43.616: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:44:43.616: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 23 00:44:43.616: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 23 00:44:43.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9909 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:44:43.824: INFO: stderr: "I0323 00:44:43.755274 2746 log.go:172] (0xc00094d290) (0xc000a00780) Create stream\nI0323 00:44:43.755346 2746 log.go:172] (0xc00094d290) (0xc000a00780) Stream added, broadcasting: 1\nI0323 00:44:43.760083 2746 log.go:172] (0xc00094d290) Reply frame received for 1\nI0323 00:44:43.760132 2746 log.go:172] (0xc00094d290) (0xc0005f7720) Create stream\nI0323 00:44:43.760146 2746 log.go:172] (0xc00094d290) (0xc0005f7720) Stream added, broadcasting: 3\nI0323 00:44:43.761075 2746 log.go:172] (0xc00094d290) Reply frame received for 3\nI0323 00:44:43.761205 2746 log.go:172] (0xc00094d290) (0xc0004f0b40) Create stream\nI0323 00:44:43.761219 2746 log.go:172] (0xc00094d290) (0xc0004f0b40) Stream added, broadcasting: 5\nI0323 00:44:43.762168 2746 log.go:172] (0xc00094d290) Reply frame received for 5\nI0323 00:44:43.817301 2746 log.go:172] 
(0xc00094d290) Data frame received for 3\nI0323 00:44:43.817343 2746 log.go:172] (0xc0005f7720) (3) Data frame handling\nI0323 00:44:43.817369 2746 log.go:172] (0xc0005f7720) (3) Data frame sent\nI0323 00:44:43.817376 2746 log.go:172] (0xc00094d290) Data frame received for 3\nI0323 00:44:43.817382 2746 log.go:172] (0xc0005f7720) (3) Data frame handling\nI0323 00:44:43.817405 2746 log.go:172] (0xc00094d290) Data frame received for 5\nI0323 00:44:43.817411 2746 log.go:172] (0xc0004f0b40) (5) Data frame handling\nI0323 00:44:43.817423 2746 log.go:172] (0xc0004f0b40) (5) Data frame sent\nI0323 00:44:43.817429 2746 log.go:172] (0xc00094d290) Data frame received for 5\nI0323 00:44:43.817435 2746 log.go:172] (0xc0004f0b40) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:44:43.819222 2746 log.go:172] (0xc00094d290) Data frame received for 1\nI0323 00:44:43.819252 2746 log.go:172] (0xc000a00780) (1) Data frame handling\nI0323 00:44:43.819273 2746 log.go:172] (0xc000a00780) (1) Data frame sent\nI0323 00:44:43.819291 2746 log.go:172] (0xc00094d290) (0xc000a00780) Stream removed, broadcasting: 1\nI0323 00:44:43.819308 2746 log.go:172] (0xc00094d290) Go away received\nI0323 00:44:43.819682 2746 log.go:172] (0xc00094d290) (0xc000a00780) Stream removed, broadcasting: 1\nI0323 00:44:43.819708 2746 log.go:172] (0xc00094d290) (0xc0005f7720) Stream removed, broadcasting: 3\nI0323 00:44:43.819726 2746 log.go:172] (0xc00094d290) (0xc0004f0b40) Stream removed, broadcasting: 5\n" Mar 23 00:44:43.824: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:44:43.824: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:44:43.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9909 ss-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:44:44.061: INFO: stderr: "I0323 00:44:43.955209 2768 log.go:172] (0xc000a0ebb0) (0xc0009043c0) Create stream\nI0323 00:44:43.955298 2768 log.go:172] (0xc000a0ebb0) (0xc0009043c0) Stream added, broadcasting: 1\nI0323 00:44:43.958755 2768 log.go:172] (0xc000a0ebb0) Reply frame received for 1\nI0323 00:44:43.958799 2768 log.go:172] (0xc000a0ebb0) (0xc0009e40a0) Create stream\nI0323 00:44:43.958809 2768 log.go:172] (0xc000a0ebb0) (0xc0009e40a0) Stream added, broadcasting: 3\nI0323 00:44:43.959793 2768 log.go:172] (0xc000a0ebb0) Reply frame received for 3\nI0323 00:44:43.959837 2768 log.go:172] (0xc000a0ebb0) (0xc000904460) Create stream\nI0323 00:44:43.959865 2768 log.go:172] (0xc000a0ebb0) (0xc000904460) Stream added, broadcasting: 5\nI0323 00:44:43.960820 2768 log.go:172] (0xc000a0ebb0) Reply frame received for 5\nI0323 00:44:44.023028 2768 log.go:172] (0xc000a0ebb0) Data frame received for 5\nI0323 00:44:44.023055 2768 log.go:172] (0xc000904460) (5) Data frame handling\nI0323 00:44:44.023072 2768 log.go:172] (0xc000904460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:44:44.054219 2768 log.go:172] (0xc000a0ebb0) Data frame received for 3\nI0323 00:44:44.054241 2768 log.go:172] (0xc0009e40a0) (3) Data frame handling\nI0323 00:44:44.054262 2768 log.go:172] (0xc0009e40a0) (3) Data frame sent\nI0323 00:44:44.054508 2768 log.go:172] (0xc000a0ebb0) Data frame received for 3\nI0323 00:44:44.054543 2768 log.go:172] (0xc0009e40a0) (3) Data frame handling\nI0323 00:44:44.054692 2768 log.go:172] (0xc000a0ebb0) Data frame received for 5\nI0323 00:44:44.054742 2768 log.go:172] (0xc000904460) (5) Data frame handling\nI0323 00:44:44.056594 2768 log.go:172] (0xc000a0ebb0) Data frame received for 1\nI0323 00:44:44.056614 2768 log.go:172] (0xc0009043c0) (1) Data frame handling\nI0323 00:44:44.056631 2768 log.go:172] (0xc0009043c0) (1) Data frame sent\nI0323 00:44:44.056646 2768 
log.go:172] (0xc000a0ebb0) (0xc0009043c0) Stream removed, broadcasting: 1\nI0323 00:44:44.056778 2768 log.go:172] (0xc000a0ebb0) Go away received\nI0323 00:44:44.056947 2768 log.go:172] (0xc000a0ebb0) (0xc0009043c0) Stream removed, broadcasting: 1\nI0323 00:44:44.056967 2768 log.go:172] (0xc000a0ebb0) (0xc0009e40a0) Stream removed, broadcasting: 3\nI0323 00:44:44.056974 2768 log.go:172] (0xc000a0ebb0) (0xc000904460) Stream removed, broadcasting: 5\n" Mar 23 00:44:44.061: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:44:44.061: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:44:44.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9909 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 00:44:44.300: INFO: stderr: "I0323 00:44:44.194743 2788 log.go:172] (0xc0008b8840) (0xc000673540) Create stream\nI0323 00:44:44.194800 2788 log.go:172] (0xc0008b8840) (0xc000673540) Stream added, broadcasting: 1\nI0323 00:44:44.197739 2788 log.go:172] (0xc0008b8840) Reply frame received for 1\nI0323 00:44:44.197784 2788 log.go:172] (0xc0008b8840) (0xc000898000) Create stream\nI0323 00:44:44.197797 2788 log.go:172] (0xc0008b8840) (0xc000898000) Stream added, broadcasting: 3\nI0323 00:44:44.198850 2788 log.go:172] (0xc0008b8840) Reply frame received for 3\nI0323 00:44:44.198923 2788 log.go:172] (0xc0008b8840) (0xc0008980a0) Create stream\nI0323 00:44:44.198949 2788 log.go:172] (0xc0008b8840) (0xc0008980a0) Stream added, broadcasting: 5\nI0323 00:44:44.199982 2788 log.go:172] (0xc0008b8840) Reply frame received for 5\nI0323 00:44:44.269459 2788 log.go:172] (0xc0008b8840) Data frame received for 5\nI0323 00:44:44.269493 2788 log.go:172] (0xc0008980a0) (5) Data frame handling\nI0323 00:44:44.269515 2788 log.go:172] 
(0xc0008980a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 00:44:44.294412 2788 log.go:172] (0xc0008b8840) Data frame received for 3\nI0323 00:44:44.294494 2788 log.go:172] (0xc000898000) (3) Data frame handling\nI0323 00:44:44.294511 2788 log.go:172] (0xc000898000) (3) Data frame sent\nI0323 00:44:44.294524 2788 log.go:172] (0xc0008b8840) Data frame received for 3\nI0323 00:44:44.294547 2788 log.go:172] (0xc000898000) (3) Data frame handling\nI0323 00:44:44.294609 2788 log.go:172] (0xc0008b8840) Data frame received for 5\nI0323 00:44:44.294654 2788 log.go:172] (0xc0008980a0) (5) Data frame handling\nI0323 00:44:44.295867 2788 log.go:172] (0xc0008b8840) Data frame received for 1\nI0323 00:44:44.295889 2788 log.go:172] (0xc000673540) (1) Data frame handling\nI0323 00:44:44.295897 2788 log.go:172] (0xc000673540) (1) Data frame sent\nI0323 00:44:44.295906 2788 log.go:172] (0xc0008b8840) (0xc000673540) Stream removed, broadcasting: 1\nI0323 00:44:44.296091 2788 log.go:172] (0xc0008b8840) Go away received\nI0323 00:44:44.296164 2788 log.go:172] (0xc0008b8840) (0xc000673540) Stream removed, broadcasting: 1\nI0323 00:44:44.296181 2788 log.go:172] (0xc0008b8840) (0xc000898000) Stream removed, broadcasting: 3\nI0323 00:44:44.296190 2788 log.go:172] (0xc0008b8840) (0xc0008980a0) Stream removed, broadcasting: 5\n" Mar 23 00:44:44.300: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 00:44:44.300: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 00:44:44.300: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:44:44.303: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 23 00:44:54.311: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 23 00:44:54.311: INFO: Waiting for pod ss-1 to enter Running - Ready=false, 
currently Running - Ready=false Mar 23 00:44:54.311: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 23 00:44:54.326: INFO: POD NODE PHASE GRACE CONDITIONS Mar 23 00:44:54.326: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC }] Mar 23 00:44:54.326: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC }] Mar 23 00:44:54.326: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC }] Mar 23 00:44:54.326: INFO: Mar 23 00:44:54.326: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 23 00:44:55.332: INFO: POD NODE PHASE GRACE CONDITIONS Mar 23 00:44:55.332: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 
UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC }] Mar 23 00:44:55.332: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC }] Mar 23 00:44:55.332: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC }] Mar 23 00:44:55.332: INFO: Mar 23 00:44:55.332: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 23 00:44:56.337: INFO: POD NODE PHASE GRACE CONDITIONS Mar 23 00:44:56.337: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC }] Mar 23 00:44:56.337: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC }] Mar 23 00:44:56.337: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC }] Mar 23 00:44:56.337: INFO: Mar 23 00:44:56.337: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 23 00:44:57.377: INFO: POD NODE PHASE GRACE CONDITIONS Mar 23 00:44:57.377: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:12 +0000 UTC }] Mar 23 00:44:57.377: INFO: ss-2 latest-worker2 Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-23 00:44:32 +0000 UTC }] Mar 23 00:44:57.377: INFO: Mar 23 00:44:57.377: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 23 00:44:58.382: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.940772375s Mar 23 00:44:59.386: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.93648149s Mar 23 00:45:00.390: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.932226418s Mar 23 00:45:01.394: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.928300862s Mar 23 00:45:02.412: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.92419883s Mar 23 00:45:03.416: INFO: Verifying statefulset ss doesn't scale past 0 for another 906.054361ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9909 Mar 23 00:45:04.419: INFO: Scaling statefulset ss to 0 Mar 23 00:45:04.429: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 23 00:45:04.431: INFO: Deleting all statefulset in ns statefulset-9909 Mar 23 00:45:04.433: INFO: Scaling statefulset ss to 0 Mar 23 00:45:04.442: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 00:45:04.445: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:45:04.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9909" for this suite. 
• [SLOW TEST:52.059 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":244,"skipped":4224,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:45:04.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Mar 23 00:45:04.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2147' Mar 23 00:45:04.839: INFO: stderr: "" Mar 23 00:45:04.839: INFO: stdout: "pod/pause created\n" Mar 23 00:45:04.839: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 23 
00:45:04.839: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2147" to be "running and ready" Mar 23 00:45:04.859: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.936961ms Mar 23 00:45:06.863: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024069043s Mar 23 00:45:08.866: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.027345028s Mar 23 00:45:08.866: INFO: Pod "pause" satisfied condition "running and ready" Mar 23 00:45:08.866: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Mar 23 00:45:08.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2147' Mar 23 00:45:08.966: INFO: stderr: "" Mar 23 00:45:08.966: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 23 00:45:08.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2147' Mar 23 00:45:09.060: INFO: stderr: "" Mar 23 00:45:09.060: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 23 00:45:09.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2147' Mar 23 00:45:09.145: INFO: stderr: "" Mar 23 00:45:09.145: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 23 00:45:09.145: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2147' Mar 23 00:45:09.237: INFO: stderr: "" Mar 23 00:45:09.237: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Mar 23 00:45:09.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2147' Mar 23 00:45:09.355: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 23 00:45:09.355: INFO: stdout: "pod \"pause\" force deleted\n" Mar 23 00:45:09.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2147' Mar 23 00:45:09.445: INFO: stderr: "No resources found in kubectl-2147 namespace.\n" Mar 23 00:45:09.445: INFO: stdout: "" Mar 23 00:45:09.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2147 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 23 00:45:09.543: INFO: stderr: "" Mar 23 00:45:09.543: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:45:09.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2147" for this suite. 
• [SLOW TEST:5.124 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":245,"skipped":4246,"failed":0} [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:45:09.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 23 00:45:09.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:45:09.924: INFO: Number of nodes with available pods: 0 Mar 23 00:45:09.924: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:45:10.929: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:45:10.932: INFO: Number of nodes with available pods: 0 Mar 23 00:45:10.932: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:45:11.937: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:45:11.941: INFO: Number of nodes with available pods: 0 Mar 23 00:45:11.941: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:45:12.928: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:45:12.932: INFO: Number of nodes with available pods: 0 Mar 23 00:45:12.932: INFO: Node latest-worker is running more than one daemon pod Mar 23 00:45:13.929: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:45:13.932: INFO: Number of nodes with available pods: 2 Mar 23 00:45:13.932: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 23 00:45:13.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 23 00:45:13.988: INFO: Number of nodes with available pods: 2 Mar 23 00:45:13.988: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6776, will wait for the garbage collector to delete the pods Mar 23 00:45:15.226: INFO: Deleting DaemonSet.extensions daemon-set took: 5.130181ms Mar 23 00:45:15.426: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.219997ms Mar 23 00:45:23.038: INFO: Number of nodes with available pods: 0 Mar 23 00:45:23.038: INFO: Number of running nodes: 0, number of available pods: 0 Mar 23 00:45:23.040: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6776/daemonsets","resourceVersion":"2026593"},"items":null} Mar 23 00:45:23.043: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6776/pods","resourceVersion":"2026593"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:45:23.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6776" for this suite. 
• [SLOW TEST:13.489 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":246,"skipped":4246,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:45:23.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:45:27.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4956" for this suite. 
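The Watchers test above starts several watches from different resource versions and verifies they all deliver the remaining events in the same order. A minimal sketch (not the e2e code itself) of that invariant, assuming a single writer producing monotonically increasing resource versions:

```python
# Sketch of the invariant checked by "should receive events on concurrent
# watches in same order": a watch started at any resourceVersion must see
# exactly the suffix of the shared event history after that version.

def watch_from(event_log, start_rv):
    """Resource versions a watch started at `start_rv` should deliver."""
    return [rv for rv in event_log if rv > start_rv]

# A history of resourceVersions produced by one background writer:
history = [101, 102, 103, 104, 105]

w1 = watch_from(history, 100)   # sees 101..105
w2 = watch_from(history, 102)   # sees 103..105

# Overlapping suffixes must be identical -- same events, same order:
assert w1[2:] == w2
```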
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":247,"skipped":4256,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:45:28.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 23 00:45:28.092: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
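"Registering the sample API server" means creating an APIService object that tells the aggregator to route a group/version to an in-cluster Service. A hypothetical sketch of that kind of registration (the group, version, and service name are illustrative guesses; only the namespace aggregator-5499 and the deployment name appear in the log):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com     # hypothetical group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:                              # backing Service for the sample API server
    name: sample-api                    # hypothetical service name
    namespace: aggregator-5499
  # caBundle: <base64 CA used to verify the sample API server's serving cert>
```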
Mar 23 00:45:28.757: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 23 00:45:30.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521128, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521128, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521128, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521128, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 00:45:33.488: INFO: Waited 520.78418ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:45:33.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5499" for this suite. 
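The DeploymentStatus dump above shows Available=False with Reason "MinimumReplicasUnavailable" while ReadyReplicas is still 0. A simplified sketch of the readiness predicate the test polls for (this condenses the real controller logic to the fields visible in the log; it is not the e2e framework's exact check):

```python
# Minimal sketch: a deployment is "complete and available" once the observed
# generation is current and every desired replica is updated, ready, and
# available -- the state the aggregator test waits for before proceeding.

def deployment_ready(status, desired_replicas, generation):
    return (status["observedGeneration"] >= generation
            and status["updatedReplicas"] == desired_replicas
            and status["readyReplicas"] == desired_replicas
            and status["availableReplicas"] == desired_replicas)

# The state logged at 00:45:30 -- still progressing, not yet available:
status = {"observedGeneration": 1, "replicas": 1, "updatedReplicas": 1,
          "readyReplicas": 0, "availableReplicas": 0}
assert not deployment_ready(status, desired_replicas=1, generation=1)
```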
• [SLOW TEST:6.106 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":248,"skipped":4262,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:45:34.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-2505 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2505 to expose endpoints map[] Mar 23 00:45:34.296: INFO: Get endpoints failed (2.537435ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 23 00:45:35.302: INFO: successfully validated that service multi-endpoint-test in namespace services-2505 exposes endpoints map[] (1.008287531s elapsed) STEP: Creating pod pod1 in namespace services-2505 STEP: waiting up to 
3m0s for service multi-endpoint-test in namespace services-2505 to expose endpoints map[pod1:[100]] Mar 23 00:45:39.416: INFO: successfully validated that service multi-endpoint-test in namespace services-2505 exposes endpoints map[pod1:[100]] (4.107758448s elapsed) STEP: Creating pod pod2 in namespace services-2505 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2505 to expose endpoints map[pod1:[100] pod2:[101]] Mar 23 00:45:42.487: INFO: successfully validated that service multi-endpoint-test in namespace services-2505 exposes endpoints map[pod1:[100] pod2:[101]] (3.068546591s elapsed) STEP: Deleting pod pod1 in namespace services-2505 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2505 to expose endpoints map[pod2:[101]] Mar 23 00:45:43.525: INFO: successfully validated that service multi-endpoint-test in namespace services-2505 exposes endpoints map[pod2:[101]] (1.033530898s elapsed) STEP: Deleting pod pod2 in namespace services-2505 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2505 to expose endpoints map[] Mar 23 00:45:44.539: INFO: successfully validated that service multi-endpoint-test in namespace services-2505 exposes endpoints map[] (1.009159015s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:45:44.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2505" for this suite. 
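The endpoints maps in the log (map[pod1:[100]], then map[pod1:[100] pod2:[101]]) correspond to a Service exposing two named ports whose target ports are 100 and 101. A hedged sketch of what such a Service could look like (service port numbers, port names, and the selector label are illustrative assumptions; only the service name, namespace, and target ports come from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-2505
spec:
  selector:
    app: multi-endpoint-test   # hypothetical label carried by pod1 and pod2
  ports:
  - name: portname1
    port: 80
    targetPort: 100            # pod1 backs this port -> endpoints map[pod1:[100]]
  - name: portname2
    port: 81
    targetPort: 101            # pod2 backs this port -> endpoints map[pod2:[101]]
```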
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.602 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":249,"skipped":4278,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:45:44.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 23 00:45:44.865: INFO: Waiting up to 5m0s for pod "pod-99ce074c-5ecc-445c-af64-cbd05d6375a5" in namespace "emptydir-1450" to be "Succeeded or Failed" Mar 23 00:45:44.887: INFO: Pod "pod-99ce074c-5ecc-445c-af64-cbd05d6375a5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.879428ms Mar 23 00:45:46.891: INFO: Pod "pod-99ce074c-5ecc-445c-af64-cbd05d6375a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02558531s Mar 23 00:45:48.894: INFO: Pod "pod-99ce074c-5ecc-445c-af64-cbd05d6375a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029322332s STEP: Saw pod success Mar 23 00:45:48.894: INFO: Pod "pod-99ce074c-5ecc-445c-af64-cbd05d6375a5" satisfied condition "Succeeded or Failed" Mar 23 00:45:48.897: INFO: Trying to get logs from node latest-worker2 pod pod-99ce074c-5ecc-445c-af64-cbd05d6375a5 container test-container: STEP: delete the pod Mar 23 00:45:48.971: INFO: Waiting for pod pod-99ce074c-5ecc-445c-af64-cbd05d6375a5 to disappear Mar 23 00:45:48.991: INFO: Pod pod-99ce074c-5ecc-445c-af64-cbd05d6375a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:45:48.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1450" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4290,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:45:48.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8337 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8337 I0323 00:45:49.135709 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8337, replica count: 2 I0323 00:45:52.186194 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 00:45:55.186462 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 00:45:55.186: INFO: Creating new exec pod Mar 23 00:46:00.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8337 execpoddhxfl -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 23 00:46:00.441: INFO: stderr: "I0323 00:46:00.346442 2979 log.go:172] (0xc00099c630) (0xc0009c0000) Create stream\nI0323 00:46:00.346515 2979 log.go:172] (0xc00099c630) (0xc0009c0000) Stream added, broadcasting: 1\nI0323 00:46:00.349796 2979 log.go:172] (0xc00099c630) Reply frame received for 1\nI0323 00:46:00.349830 2979 log.go:172] (0xc00099c630) (0xc0009c00a0) Create stream\nI0323 00:46:00.349841 2979 log.go:172] (0xc00099c630) (0xc0009c00a0) Stream added, broadcasting: 3\nI0323 00:46:00.350830 2979 log.go:172] (0xc00099c630) Reply frame received for 3\nI0323 00:46:00.350885 2979 log.go:172] (0xc00099c630) (0xc000a5c000) Create stream\nI0323 00:46:00.350917 2979 log.go:172] (0xc00099c630) (0xc000a5c000) Stream added, broadcasting: 5\nI0323 00:46:00.351884 2979 log.go:172] (0xc00099c630) Reply frame received for 5\nI0323 00:46:00.436673 2979 log.go:172] 
(0xc00099c630) Data frame received for 3\nI0323 00:46:00.436697 2979 log.go:172] (0xc0009c00a0) (3) Data frame handling\nI0323 00:46:00.436729 2979 log.go:172] (0xc00099c630) Data frame received for 5\nI0323 00:46:00.436757 2979 log.go:172] (0xc000a5c000) (5) Data frame handling\nI0323 00:46:00.436773 2979 log.go:172] (0xc000a5c000) (5) Data frame sent\nI0323 00:46:00.436782 2979 log.go:172] (0xc00099c630) Data frame received for 5\nI0323 00:46:00.436795 2979 log.go:172] (0xc000a5c000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0323 00:46:00.438334 2979 log.go:172] (0xc00099c630) Data frame received for 1\nI0323 00:46:00.438361 2979 log.go:172] (0xc0009c0000) (1) Data frame handling\nI0323 00:46:00.438376 2979 log.go:172] (0xc0009c0000) (1) Data frame sent\nI0323 00:46:00.438407 2979 log.go:172] (0xc00099c630) (0xc0009c0000) Stream removed, broadcasting: 1\nI0323 00:46:00.438428 2979 log.go:172] (0xc00099c630) Go away received\nI0323 00:46:00.438682 2979 log.go:172] (0xc00099c630) (0xc0009c0000) Stream removed, broadcasting: 1\nI0323 00:46:00.438698 2979 log.go:172] (0xc00099c630) (0xc0009c00a0) Stream removed, broadcasting: 3\nI0323 00:46:00.438706 2979 log.go:172] (0xc00099c630) (0xc000a5c000) Stream removed, broadcasting: 5\n" Mar 23 00:46:00.441: INFO: stdout: "" Mar 23 00:46:00.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8337 execpoddhxfl -- /bin/sh -x -c nc -zv -t -w 2 10.96.220.45 80' Mar 23 00:46:00.643: INFO: stderr: "I0323 00:46:00.570216 3000 log.go:172] (0xc0003c7970) (0xc000900000) Create stream\nI0323 00:46:00.570278 3000 log.go:172] (0xc0003c7970) (0xc000900000) Stream added, broadcasting: 1\nI0323 00:46:00.573318 3000 log.go:172] (0xc0003c7970) Reply frame received for 1\nI0323 00:46:00.573356 3000 log.go:172] (0xc0003c7970) (0xc0006db220) Create stream\nI0323 
00:46:00.573381 3000 log.go:172] (0xc0003c7970) (0xc0006db220) Stream added, broadcasting: 3\nI0323 00:46:00.574644 3000 log.go:172] (0xc0003c7970) Reply frame received for 3\nI0323 00:46:00.574723 3000 log.go:172] (0xc0003c7970) (0xc000406000) Create stream\nI0323 00:46:00.574754 3000 log.go:172] (0xc0003c7970) (0xc000406000) Stream added, broadcasting: 5\nI0323 00:46:00.575918 3000 log.go:172] (0xc0003c7970) Reply frame received for 5\nI0323 00:46:00.637840 3000 log.go:172] (0xc0003c7970) Data frame received for 3\nI0323 00:46:00.637891 3000 log.go:172] (0xc0006db220) (3) Data frame handling\nI0323 00:46:00.637923 3000 log.go:172] (0xc0003c7970) Data frame received for 5\nI0323 00:46:00.637940 3000 log.go:172] (0xc000406000) (5) Data frame handling\nI0323 00:46:00.637960 3000 log.go:172] (0xc000406000) (5) Data frame sent\nI0323 00:46:00.637979 3000 log.go:172] (0xc0003c7970) Data frame received for 5\nI0323 00:46:00.637996 3000 log.go:172] (0xc000406000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.220.45 80\nConnection to 10.96.220.45 80 port [tcp/http] succeeded!\nI0323 00:46:00.639314 3000 log.go:172] (0xc0003c7970) Data frame received for 1\nI0323 00:46:00.639358 3000 log.go:172] (0xc000900000) (1) Data frame handling\nI0323 00:46:00.639385 3000 log.go:172] (0xc000900000) (1) Data frame sent\nI0323 00:46:00.639432 3000 log.go:172] (0xc0003c7970) (0xc000900000) Stream removed, broadcasting: 1\nI0323 00:46:00.639475 3000 log.go:172] (0xc0003c7970) Go away received\nI0323 00:46:00.639782 3000 log.go:172] (0xc0003c7970) (0xc000900000) Stream removed, broadcasting: 1\nI0323 00:46:00.639797 3000 log.go:172] (0xc0003c7970) (0xc0006db220) Stream removed, broadcasting: 3\nI0323 00:46:00.639805 3000 log.go:172] (0xc0003c7970) (0xc000406000) Stream removed, broadcasting: 5\n" Mar 23 00:46:00.643: INFO: stdout: "" Mar 23 00:46:00.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=services-8337 execpoddhxfl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31640' Mar 23 00:46:00.847: INFO: stderr: "I0323 00:46:00.775805 3021 log.go:172] (0xc0008fa210) (0xc000665400) Create stream\nI0323 00:46:00.775855 3021 log.go:172] (0xc0008fa210) (0xc000665400) Stream added, broadcasting: 1\nI0323 00:46:00.778982 3021 log.go:172] (0xc0008fa210) Reply frame received for 1\nI0323 00:46:00.779048 3021 log.go:172] (0xc0008fa210) (0xc0006654a0) Create stream\nI0323 00:46:00.779074 3021 log.go:172] (0xc0008fa210) (0xc0006654a0) Stream added, broadcasting: 3\nI0323 00:46:00.780221 3021 log.go:172] (0xc0008fa210) Reply frame received for 3\nI0323 00:46:00.780260 3021 log.go:172] (0xc0008fa210) (0xc000665540) Create stream\nI0323 00:46:00.780272 3021 log.go:172] (0xc0008fa210) (0xc000665540) Stream added, broadcasting: 5\nI0323 00:46:00.781681 3021 log.go:172] (0xc0008fa210) Reply frame received for 5\nI0323 00:46:00.841246 3021 log.go:172] (0xc0008fa210) Data frame received for 3\nI0323 00:46:00.841309 3021 log.go:172] (0xc0006654a0) (3) Data frame handling\nI0323 00:46:00.841345 3021 log.go:172] (0xc0008fa210) Data frame received for 5\nI0323 00:46:00.841376 3021 log.go:172] (0xc000665540) (5) Data frame handling\nI0323 00:46:00.841405 3021 log.go:172] (0xc000665540) (5) Data frame sent\nI0323 00:46:00.841424 3021 log.go:172] (0xc0008fa210) Data frame received for 5\nI0323 00:46:00.841439 3021 log.go:172] (0xc000665540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31640\nConnection to 172.17.0.13 31640 port [tcp/31640] succeeded!\nI0323 00:46:00.843151 3021 log.go:172] (0xc0008fa210) Data frame received for 1\nI0323 00:46:00.843174 3021 log.go:172] (0xc000665400) (1) Data frame handling\nI0323 00:46:00.843202 3021 log.go:172] (0xc000665400) (1) Data frame sent\nI0323 00:46:00.843234 3021 log.go:172] (0xc0008fa210) (0xc000665400) Stream removed, broadcasting: 1\nI0323 00:46:00.843252 3021 log.go:172] (0xc0008fa210) Go away received\nI0323 
00:46:00.843649 3021 log.go:172] (0xc0008fa210) (0xc000665400) Stream removed, broadcasting: 1\nI0323 00:46:00.843681 3021 log.go:172] (0xc0008fa210) (0xc0006654a0) Stream removed, broadcasting: 3\nI0323 00:46:00.843708 3021 log.go:172] (0xc0008fa210) (0xc000665540) Stream removed, broadcasting: 5\n" Mar 23 00:46:00.848: INFO: stdout: "" Mar 23 00:46:00.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8337 execpoddhxfl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31640' Mar 23 00:46:01.049: INFO: stderr: "I0323 00:46:00.976270 3044 log.go:172] (0xc00003a0b0) (0xc000584000) Create stream\nI0323 00:46:00.976330 3044 log.go:172] (0xc00003a0b0) (0xc000584000) Stream added, broadcasting: 1\nI0323 00:46:00.979352 3044 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0323 00:46:00.979413 3044 log.go:172] (0xc00003a0b0) (0xc0005ec000) Create stream\nI0323 00:46:00.979439 3044 log.go:172] (0xc00003a0b0) (0xc0005ec000) Stream added, broadcasting: 3\nI0323 00:46:00.980442 3044 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0323 00:46:00.980474 3044 log.go:172] (0xc00003a0b0) (0xc0005ec140) Create stream\nI0323 00:46:00.980483 3044 log.go:172] (0xc00003a0b0) (0xc0005ec140) Stream added, broadcasting: 5\nI0323 00:46:00.981684 3044 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0323 00:46:01.043649 3044 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0323 00:46:01.043686 3044 log.go:172] (0xc0005ec000) (3) Data frame handling\nI0323 00:46:01.043713 3044 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0323 00:46:01.043730 3044 log.go:172] (0xc0005ec140) (5) Data frame handling\nI0323 00:46:01.043743 3044 log.go:172] (0xc0005ec140) (5) Data frame sent\nI0323 00:46:01.043762 3044 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0323 00:46:01.043778 3044 log.go:172] (0xc0005ec140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31640\nConnection 
to 172.17.0.12 31640 port [tcp/31640] succeeded!\nI0323 00:46:01.044988 3044 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0323 00:46:01.045026 3044 log.go:172] (0xc000584000) (1) Data frame handling\nI0323 00:46:01.045049 3044 log.go:172] (0xc000584000) (1) Data frame sent\nI0323 00:46:01.045074 3044 log.go:172] (0xc00003a0b0) (0xc000584000) Stream removed, broadcasting: 1\nI0323 00:46:01.045102 3044 log.go:172] (0xc00003a0b0) Go away received\nI0323 00:46:01.045698 3044 log.go:172] (0xc00003a0b0) (0xc000584000) Stream removed, broadcasting: 1\nI0323 00:46:01.045726 3044 log.go:172] (0xc00003a0b0) (0xc0005ec000) Stream removed, broadcasting: 3\nI0323 00:46:01.045739 3044 log.go:172] (0xc00003a0b0) (0xc0005ec140) Stream removed, broadcasting: 5\n" Mar 23 00:46:01.049: INFO: stdout: "" Mar 23 00:46:01.049: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:46:01.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8337" for this suite. 
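The four kubectl exec invocations above verify reachability via the service name, the ClusterIP (10.96.220.45:80), and the NodePort (31640) on both node IPs, each by checking that `nc -zv` reports a "succeeded!" line on stderr. A small self-contained sketch of parsing those captured lines (an illustration of the success criterion, not the framework's actual check):

```python
import re

# The test treats a connectivity probe as passed when the captured stderr
# contains an "nc" connection-succeeded line like the ones logged above.

def nc_succeeded(stderr):
    """Return (host, port) from an `nc -zv` success line, or None."""
    m = re.search(r"Connection to (\S+) (\d+) port \[[^\]]+\] succeeded!", stderr)
    return (m.group(1), int(m.group(2))) if m else None

line = ("+ nc -zv -t -w 2 172.17.0.13 31640\n"
        "Connection to 172.17.0.13 31640 port [tcp/31640] succeeded!\n")
assert nc_succeeded(line) == ("172.17.0.13", 31640)
```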
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.140 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":251,"skipped":4296,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:46:01.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ee326641-f843-472b-a0fa-76bd803e4b36 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ee326641-f843-472b-a0fa-76bd803e4b36 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:46:09.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-728" for this 
suite. • [SLOW TEST:8.136 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4306,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:46:09.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-wplt STEP: Creating a pod to test atomic-volume-subpath Mar 23 00:46:09.341: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wplt" in namespace "subpath-4620" to be "Succeeded or Failed" Mar 23 00:46:09.366: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Pending", Reason="", readiness=false. Elapsed: 25.175344ms Mar 23 00:46:11.370: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028844862s
Mar 23 00:46:13.374: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 4.032652222s
Mar 23 00:46:15.407: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 6.066132406s
Mar 23 00:46:17.411: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 8.070107305s
Mar 23 00:46:19.415: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 10.073618364s
Mar 23 00:46:21.418: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 12.077075322s
Mar 23 00:46:23.423: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 14.081394042s
Mar 23 00:46:25.427: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 16.085772605s
Mar 23 00:46:27.431: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 18.090193696s
Mar 23 00:46:29.436: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 20.094530998s
Mar 23 00:46:31.440: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Running", Reason="", readiness=true. Elapsed: 22.098734043s
Mar 23 00:46:33.444: INFO: Pod "pod-subpath-test-secret-wplt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.102717065s
STEP: Saw pod success
Mar 23 00:46:33.444: INFO: Pod "pod-subpath-test-secret-wplt" satisfied condition "Succeeded or Failed"
Mar 23 00:46:33.446: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-wplt container test-container-subpath-secret-wplt:
STEP: delete the pod
Mar 23 00:46:33.478: INFO: Waiting for pod pod-subpath-test-secret-wplt to disappear
Mar 23 00:46:33.494: INFO: Pod pod-subpath-test-secret-wplt no longer exists
STEP: Deleting pod pod-subpath-test-secret-wplt
Mar 23 00:46:33.494: INFO: Deleting pod "pod-subpath-test-secret-wplt" in namespace "subpath-4620"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:46:33.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4620" for this suite.
• [SLOW TEST:24.230 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":253,"skipped":4316,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:46:33.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 23 00:46:38.108: INFO: Successfully updated pod "labelsupdate35901f6c-f09a-4078-927b-98d856348ed0"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:46:40.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9688" for this suite.
• [SLOW TEST:6.666 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4336,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:46:40.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 23 00:46:40.640: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 23 00:46:42.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521200, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521200, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521200, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521200, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 23 00:46:45.680: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:46:45.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-520" for this suite.
STEP: Destroying namespace "webhook-520-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.769 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":255,"skipped":4347,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:46:45.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 23 00:46:47.822: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 23 00:46:49.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521207, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521207, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521207, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521207, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 23 00:46:52.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:46:53.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9032" for this suite.
STEP: Destroying namespace "webhook-9032-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.491 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":256,"skipped":4362,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:46:53.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 23 00:46:54.544: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 23 00:46:56.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521214, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521214, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521214, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521214, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 23 00:46:59.613: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:46:59.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9054-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:47:00.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3342" for this suite.
STEP: Destroying namespace "webhook-3342-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.423 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":257,"skipped":4366,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:47:00.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:47:00.958: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-1cfd101a-320c-433e-b41e-5f9f856b4db3" in namespace "security-context-test-2644" to be "Succeeded or Failed"
Mar 23 00:47:00.965: INFO: Pod "busybox-readonly-false-1cfd101a-320c-433e-b41e-5f9f856b4db3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.943263ms
Mar 23 00:47:02.969: INFO: Pod "busybox-readonly-false-1cfd101a-320c-433e-b41e-5f9f856b4db3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011128469s
Mar 23 00:47:04.989: INFO: Pod "busybox-readonly-false-1cfd101a-320c-433e-b41e-5f9f856b4db3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031186819s
Mar 23 00:47:04.989: INFO: Pod "busybox-readonly-false-1cfd101a-320c-433e-b41e-5f9f856b4db3" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:47:04.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2644" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:47:04.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 00:47:05.080: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3e0698cd-41e3-46e5-857e-ac0e3070172f" in namespace "security-context-test-2259" to be "Succeeded or Failed"
Mar 23 00:47:05.115: INFO: Pod "alpine-nnp-false-3e0698cd-41e3-46e5-857e-ac0e3070172f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.473752ms
Mar 23 00:47:07.119: INFO: Pod "alpine-nnp-false-3e0698cd-41e3-46e5-857e-ac0e3070172f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038314481s
Mar 23 00:47:09.127: INFO: Pod "alpine-nnp-false-3e0698cd-41e3-46e5-857e-ac0e3070172f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046747645s
Mar 23 00:47:09.127: INFO: Pod "alpine-nnp-false-3e0698cd-41e3-46e5-857e-ac0e3070172f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:47:09.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2259" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4421,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:47:09.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Mar 23 00:47:09.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info'
Mar 23 00:47:09.390: INFO: stderr: ""
Mar 23 00:47:09.390: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:47:09.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2217" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":260,"skipped":4436,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:47:09.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-e53a1a97-b2d3-49b9-8bb8-445ebd5a827d
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 00:47:09.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6652" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":261,"skipped":4439,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 00:47:09.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-c8c4cebb-9fe9-44f2-ae7e-39215adae9de
STEP: Creating a pod to test consume secrets
Mar 23 00:47:09.522: INFO: Waiting up to 5m0s for pod "pod-secrets-686466ed-1516-414a-a937-a605d2d716ba" in namespace "secrets-2801" to be "Succeeded or Failed"
Mar 23 00:47:09.526: INFO: Pod "pod-secrets-686466ed-1516-414a-a937-a605d2d716ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.501822ms
Mar 23 00:47:11.530: INFO: Pod "pod-secrets-686466ed-1516-414a-a937-a605d2d716ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007887971s
Mar 23 00:47:13.534: INFO: Pod "pod-secrets-686466ed-1516-414a-a937-a605d2d716ba": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.011621914s STEP: Saw pod success Mar 23 00:47:13.534: INFO: Pod "pod-secrets-686466ed-1516-414a-a937-a605d2d716ba" satisfied condition "Succeeded or Failed" Mar 23 00:47:13.537: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-686466ed-1516-414a-a937-a605d2d716ba container secret-volume-test: STEP: delete the pod Mar 23 00:47:13.551: INFO: Waiting for pod pod-secrets-686466ed-1516-414a-a937-a605d2d716ba to disappear Mar 23 00:47:13.569: INFO: Pod pod-secrets-686466ed-1516-414a-a937-a605d2d716ba no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:47:13.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2801" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4448,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:47:13.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the 
deployment to be ready Mar 23 00:47:13.982: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 00:47:15.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521234, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521234, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521234, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521233, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 00:47:19.013: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:47:19.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9304-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:47:20.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-5994" for this suite. STEP: Destroying namespace "webhook-5994-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.678 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":263,"skipped":4454,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:47:20.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 23 00:47:20.351: INFO: Waiting up to 5m0s for pod "downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439" in namespace "downward-api-3648" to be "Succeeded or Failed" Mar 23 00:47:20.354: INFO: Pod "downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.208551ms Mar 23 00:47:22.487: INFO: Pod "downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13594451s Mar 23 00:47:24.491: INFO: Pod "downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140442959s STEP: Saw pod success Mar 23 00:47:24.491: INFO: Pod "downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439" satisfied condition "Succeeded or Failed" Mar 23 00:47:24.495: INFO: Trying to get logs from node latest-worker pod downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439 container dapi-container: STEP: delete the pod Mar 23 00:47:24.516: INFO: Waiting for pod downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439 to disappear Mar 23 00:47:24.520: INFO: Pod downward-api-bdc08937-a263-4a3f-89a3-45fb899ba439 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:47:24.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3648" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4472,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:47:24.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2631.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2631.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 23 00:47:28.638: INFO: DNS probes using dns-test-ee809174-f8a2-4042-a965-4b3dfd5753a1 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2631.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-2631.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 23 00:47:34.737: INFO: File wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:34.741: INFO: File jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:34.741: INFO: Lookups using dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 failed for: [wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local] Mar 23 00:47:39.763: INFO: File wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:39.767: INFO: File jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:39.767: INFO: Lookups using dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 failed for: [wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local] Mar 23 00:47:44.746: INFO: File wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:44.749: INFO: File jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. 
' instead of 'bar.example.com.' Mar 23 00:47:44.749: INFO: Lookups using dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 failed for: [wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local] Mar 23 00:47:49.746: INFO: File wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:49.750: INFO: File jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:49.750: INFO: Lookups using dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 failed for: [wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local] Mar 23 00:47:54.746: INFO: File wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 23 00:47:54.750: INFO: File jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local from pod dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 23 00:47:54.750: INFO: Lookups using dns-2631/dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 failed for: [wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local] Mar 23 00:47:59.749: INFO: DNS probes using dns-test-cb33efee-afb7-4c8d-a527-6bba71cbce77 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2631.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2631.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 23 00:48:06.219: INFO: DNS probes using dns-test-7790e05c-5ba7-4d8c-a664-8fce669c01bd succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:48:06.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2631" for this suite. 
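The DNS probe failures above follow a simple pattern: each prober pod writes `dig +short` output into a per-probe file under `/results/`, and the test compares each file's contents against the expected record, retrying until the CNAME change (foo.example.com. to bar.example.com.) propagates. A minimal sketch of that comparison step, with a hypothetical `failed_lookups` helper standing in for the framework's check:

```python
# Hypothetical sketch of the probe-file check behind the log entries above.
# `results` maps a probe name (e.g. "wheezy_udp@<fqdn>") to the contents of
# its /results/ file; a lookup fails when the answer doesn't match.
def failed_lookups(results, expected):
    """Return the probe names whose recorded DNS answer does not match."""
    failed = []
    for name, contents in results.items():
        if contents.strip() != expected:
            failed.append(name)
    return failed

results = {
    "wheezy_udp@dns-test-service-3.dns-2631.svc.cluster.local": "foo.example.com.\n",
    "jessie_udp@dns-test-service-3.dns-2631.svc.cluster.local": "foo.example.com.\n",
}
# Both probes still return the old CNAME target, so both are reported failed,
# exactly as in the "Lookups ... failed for:" lines above.
print(failed_lookups(results, "bar.example.com."))
```

Once the cluster DNS serves the new record, the failed list is empty and the log reports "DNS probes ... succeeded".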
• [SLOW TEST:41.806 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":265,"skipped":4474,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:48:06.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-593d23d4-b37f-438e-a874-294a4d9b8388 STEP: Creating a pod to test consume configMaps Mar 23 00:48:06.663: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8" in namespace "projected-2112" to be "Succeeded or Failed" Mar 23 00:48:06.679: INFO: Pod "pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.215934ms Mar 23 00:48:08.683: INFO: Pod "pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019290995s Mar 23 00:48:10.962: INFO: Pod "pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.298596441s STEP: Saw pod success Mar 23 00:48:10.962: INFO: Pod "pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8" satisfied condition "Succeeded or Failed" Mar 23 00:48:10.969: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8 container projected-configmap-volume-test: STEP: delete the pod Mar 23 00:48:11.053: INFO: Waiting for pod pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8 to disappear Mar 23 00:48:11.088: INFO: Pod pod-projected-configmaps-8177c6fa-e0be-4d19-bd13-d228f3fd1ee8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:48:11.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2112" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4486,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:48:11.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:48:18.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6361" for this suite. • [SLOW TEST:7.104 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":267,"skipped":4490,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:48:18.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-ad9538e0-cbf7-42c7-9685-f07de846c081 STEP: Creating a pod to test consume configMaps Mar 23 00:48:18.493: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7" in namespace "projected-1753" to be "Succeeded or Failed" Mar 23 00:48:18.497: INFO: Pod "pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035448ms Mar 23 00:48:20.502: INFO: Pod "pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008258987s Mar 23 00:48:22.506: INFO: Pod "pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012534233s STEP: Saw pod success Mar 23 00:48:22.506: INFO: Pod "pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7" satisfied condition "Succeeded or Failed" Mar 23 00:48:22.509: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7 container projected-configmap-volume-test: STEP: delete the pod Mar 23 00:48:22.542: INFO: Waiting for pod pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7 to disappear Mar 23 00:48:22.558: INFO: Pod pod-projected-configmaps-80cc6887-1008-45c6-adca-2b21c6f582d7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:48:22.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1753" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4511,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:48:22.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-9dht STEP: Creating a pod to test atomic-volume-subpath Mar 23 00:48:22.643: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9dht" in namespace "subpath-850" to be "Succeeded or Failed" Mar 23 00:48:22.648: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261923ms Mar 23 00:48:24.652: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008908006s Mar 23 00:48:26.656: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 4.012846159s Mar 23 00:48:28.660: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 6.016637236s Mar 23 00:48:30.664: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 8.020445363s Mar 23 00:48:32.668: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 10.024539143s Mar 23 00:48:34.672: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 12.028750347s Mar 23 00:48:36.676: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 14.032805007s Mar 23 00:48:38.681: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 16.037386119s Mar 23 00:48:40.685: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 18.041936868s Mar 23 00:48:42.690: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. Elapsed: 20.046186934s Mar 23 00:48:44.694: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.050491301s Mar 23 00:48:46.698: INFO: Pod "pod-subpath-test-configmap-9dht": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054336629s STEP: Saw pod success Mar 23 00:48:46.698: INFO: Pod "pod-subpath-test-configmap-9dht" satisfied condition "Succeeded or Failed" Mar 23 00:48:46.700: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-9dht container test-container-subpath-configmap-9dht: STEP: delete the pod Mar 23 00:48:46.723: INFO: Waiting for pod pod-subpath-test-configmap-9dht to disappear Mar 23 00:48:46.727: INFO: Pod pod-subpath-test-configmap-9dht no longer exists STEP: Deleting pod pod-subpath-test-configmap-9dht Mar 23 00:48:46.727: INFO: Deleting pod "pod-subpath-test-configmap-9dht" in namespace "subpath-850" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:48:46.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-850" for this suite. 
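The repeated "Waiting up to 5m0s for pod ... to be Succeeded or Failed" entries above come from a poll loop: the framework re-reads the pod phase roughly every two seconds (the Elapsed deltas in the log) until a terminal phase or the timeout. A hedged sketch of that loop, with `get_phase` as a stand-in for the API call and the interval/timeout taken from the log, not from the framework source:

```python
# Illustrative poll-until-terminal loop matching the log's cadence; the
# elapsed clock is simulated rather than wall time so the sketch is testable.
def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2.0):
    elapsed = 0.0
    phase = None
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        elapsed += interval_s
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")

# Phase sequence mirroring the subpath pod above: Pending, then Running
# while the container loops, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Running", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # ('Succeeded', 8.0)
```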
• [SLOW TEST:24.170 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":269,"skipped":4534,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:48:46.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-37500721-3e26-48f5-9cba-bd461e64e748 STEP: Creating a pod to test consume configMaps Mar 23 00:48:46.822: INFO: Waiting up to 5m0s for pod "pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828" in namespace "configmap-1002" to be "Succeeded or Failed" Mar 23 00:48:46.855: INFO: Pod "pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.626277ms Mar 23 00:48:48.859: INFO: Pod "pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036608245s Mar 23 00:48:50.863: INFO: Pod "pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040960872s STEP: Saw pod success Mar 23 00:48:50.864: INFO: Pod "pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828" satisfied condition "Succeeded or Failed" Mar 23 00:48:50.867: INFO: Trying to get logs from node latest-worker pod pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828 container configmap-volume-test: STEP: delete the pod Mar 23 00:48:50.899: INFO: Waiting for pod pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828 to disappear Mar 23 00:48:50.907: INFO: Pod pod-configmaps-54d2a675-c26b-424b-968c-289ebaa90828 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:48:50.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1002" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4551,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:48:50.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-g5d7 STEP: Creating a pod to test atomic-volume-subpath Mar 23 00:48:51.026: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-g5d7" in namespace "subpath-8909" to be "Succeeded or Failed" Mar 23 00:48:51.069: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.356049ms Mar 23 00:48:53.072: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04562753s Mar 23 00:48:55.076: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.049755153s Mar 23 00:48:57.080: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.053268222s Mar 23 00:48:59.084: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 8.057405642s Mar 23 00:49:01.088: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 10.061483247s Mar 23 00:49:03.092: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 12.06585441s Mar 23 00:49:05.097: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 14.069962094s Mar 23 00:49:07.101: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 16.074588014s Mar 23 00:49:09.106: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 18.079313187s Mar 23 00:49:11.110: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 20.083627811s Mar 23 00:49:13.114: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Running", Reason="", readiness=true. Elapsed: 22.087554214s Mar 23 00:49:15.119: INFO: Pod "pod-subpath-test-configmap-g5d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.092167342s STEP: Saw pod success Mar 23 00:49:15.119: INFO: Pod "pod-subpath-test-configmap-g5d7" satisfied condition "Succeeded or Failed" Mar 23 00:49:15.122: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-g5d7 container test-container-subpath-configmap-g5d7: STEP: delete the pod Mar 23 00:49:15.152: INFO: Waiting for pod pod-subpath-test-configmap-g5d7 to disappear Mar 23 00:49:15.157: INFO: Pod pod-subpath-test-configmap-g5d7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-g5d7 Mar 23 00:49:15.157: INFO: Deleting pod "pod-subpath-test-configmap-g5d7" in namespace "subpath-8909" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:49:15.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8909" for this suite. • [SLOW TEST:24.252 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":271,"skipped":4560,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:49:15.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 23 00:49:15.231: INFO: Waiting up to 5m0s for pod "pod-e081b771-f532-4f15-8146-d1f779a91686" in namespace "emptydir-304" to be "Succeeded or Failed" Mar 23 00:49:15.235: INFO: Pod "pod-e081b771-f532-4f15-8146-d1f779a91686": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855073ms Mar 23 00:49:17.238: INFO: Pod "pod-e081b771-f532-4f15-8146-d1f779a91686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007197838s Mar 23 00:49:19.242: INFO: Pod "pod-e081b771-f532-4f15-8146-d1f779a91686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010624913s STEP: Saw pod success Mar 23 00:49:19.242: INFO: Pod "pod-e081b771-f532-4f15-8146-d1f779a91686" satisfied condition "Succeeded or Failed" Mar 23 00:49:19.245: INFO: Trying to get logs from node latest-worker2 pod pod-e081b771-f532-4f15-8146-d1f779a91686 container test-container: STEP: delete the pod Mar 23 00:49:19.269: INFO: Waiting for pod pod-e081b771-f532-4f15-8146-d1f779a91686 to disappear Mar 23 00:49:19.308: INFO: Pod pod-e081b771-f532-4f15-8146-d1f779a91686 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:49:19.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-304" for this suite. 
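The emptydir "(non-root,0777,default)" case above boils down to creating a file on the emptyDir mount with mode 0777 and verifying the permission bits survive. A self-contained sketch of that check, using a temporary directory in place of the pod's emptyDir mount (the path is illustrative, not the test's actual mount point):

```python
# Illustrative permission check: create a file with mode 0777 and read the
# bits back, as the test container does inside the pod. os.open honors the
# process umask, so an explicit chmod is needed to guarantee 0777.
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "testfile")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o777)
    os.close(fd)
    os.chmod(path, 0o777)  # bypass umask
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o777
```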
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4581,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:49:19.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:49:19.391: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:49:25.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1970" for this suite. 
• [SLOW TEST:6.323 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":273,"skipped":4585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:49:25.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 23 00:49:25.689: INFO: PodSpec: initContainers in spec.initContainers Mar 23 00:50:10.045: INFO: init container has failed twice: 
&v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bc6bace8-8df7-47c2-b737-c68444ebd0c4", GenerateName:"", Namespace:"init-container-9218", SelfLink:"/api/v1/namespaces/init-container-9218/pods/pod-init-bc6bace8-8df7-47c2-b737-c68444ebd0c4", UID:"327beec4-1c7e-4069-9f2e-4f144798a7d3", ResourceVersion:"2028660", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720521365, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"689539660"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-p5fkx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00659b780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p5fkx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p5fkx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, 
TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p5fkx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0033f4798), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002438af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0033f4820)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0033f4840)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0033f4848), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0033f484c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521365, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521365, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521365, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720521365, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.71", 
PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.71"}}, StartTime:(*v1.Time)(0xc0045fbb00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002438bd0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002438c40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://cf91d2587ded488ea08d4a8c9229c86c8158d3e8fdc3beec89f4cfa327e0963e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0045fbb40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0045fbb20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0033f48cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:50:10.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9218" for this suite. • [SLOW TEST:44.496 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":274,"skipped":4626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 00:50:10.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 00:50:10.180: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 23 00:50:13.088: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 create -f -' Mar 23 00:50:16.689: INFO: stderr: "" Mar 23 00:50:16.689: INFO: stdout: "e2e-test-crd-publish-openapi-3234-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 23 00:50:16.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 delete e2e-test-crd-publish-openapi-3234-crds test-foo' Mar 23 00:50:16.805: INFO: stderr: "" Mar 23 00:50:16.805: INFO: stdout: "e2e-test-crd-publish-openapi-3234-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 23 00:50:16.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 apply -f -' Mar 23 00:50:17.366: INFO: stderr: "" Mar 23 00:50:17.366: INFO: stdout: "e2e-test-crd-publish-openapi-3234-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 23 00:50:17.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 delete e2e-test-crd-publish-openapi-3234-crds test-foo' Mar 23 00:50:17.457: INFO: stderr: "" Mar 23 00:50:17.457: INFO: stdout: "e2e-test-crd-publish-openapi-3234-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 23 00:50:17.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 create -f -' Mar 23 00:50:17.698: INFO: rc: 1 Mar 23 00:50:17.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 apply -f -' Mar 23 00:50:17.921: INFO: rc: 1 STEP: client-side 
validation (kubectl create and apply) rejects request without required properties Mar 23 00:50:17.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 create -f -' Mar 23 00:50:18.153: INFO: rc: 1 Mar 23 00:50:18.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9959 apply -f -' Mar 23 00:50:18.418: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 23 00:50:18.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3234-crds' Mar 23 00:50:18.672: INFO: stderr: "" Mar 23 00:50:18.672: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3234-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 23 00:50:18.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3234-crds.metadata' Mar 23 00:50:18.908: INFO: stderr: "" Mar 23 00:50:18.908: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3234-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 23 00:50:18.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3234-crds.spec' Mar 23 00:50:19.145: INFO: stderr: "" Mar 23 00:50:19.145: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3234-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 23 00:50:19.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3234-crds.spec.bars' Mar 23 00:50:19.377: INFO: stderr: "" Mar 23 00:50:19.377: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3234-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 23 00:50:19.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3234-crds.spec.bars2' Mar 23 00:50:19.642: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 00:50:21.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9959" for this suite. • [SLOW TEST:11.409 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":275,"skipped":4711,"failed":0} SSSSSSMar 23 00:50:21.546: INFO: Running AfterSuite actions on all nodes Mar 23 00:50:21.546: INFO: Running AfterSuite actions on node 1 Mar 23 00:50:21.546: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0} Ran 275 of 4992 Specs in 4458.816 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped PASS
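
Editor's note: the InitContainer spec whose pod dump appears above verifies that app containers never start while an init container keeps failing on a restartPolicy: Always pod. As a minimal sketch (not the test's actual fixture; the pod name is invented, but the container names, images, and cpu request/limit mirror the dumped pod spec), the same behavior can be reproduced with:

```yaml
# Hypothetical reproduction of the logged pod: init1 exits non-zero on every
# attempt, so the kubelet restarts it with backoff and never starts init2 or
# run1 -- matching the "containers with incomplete status: [init1 init2]"
# condition recorded in the dumped status.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo        # illustrative name, not from the test
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]    # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m             # cpu set but memory unset => Burstable QOSClass, as in the dump
      limits:
        cpu: 100m
```

`kubectl get pod init-fail-demo` would then show the pod stuck in `Init:0/2` (cycling through `Init:Error`/`Init:CrashLoopBackOff`) with a climbing restart count on init1, consistent with the `RestartCount:3` recorded in the dumped InitContainerStatuses.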
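
Editor's note: the CustomResourcePublishOpenAPI spec depends on the apiserver publishing the CRD's OpenAPI v3 validation schema, which is what drives both the client-side create/apply validation (the `rc: 1` runs) and `kubectl explain`. The generated CRD manifest itself never appears in the log; the sketch below is a reconstruction under that caveat -- resource names are taken from the logged output, field descriptions from the explain output, and the concrete types (string for name/age, []string for bazs) are assumptions:

```yaml
# Reconstructed sketch of the test's generated CRD; not copied from the log.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-3234-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-3234-crds
    kind: E2e-test-crd-publish-openapi-3234-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]   # why "create without required properties" fails
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string     # assumed type
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
          status:
            description: Status of Foo
            type: object
```

With such a schema published, `kubectl create`/`apply` can reject documents with unknown or missing required properties before sending them to the server, and `kubectl explain e2e-test-crd-publish-openapi-3234-crds.spec.bars` can describe each field, as exercised above.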