I0511 15:12:26.599355 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0511 15:12:26.599595 6 e2e.go:109] Starting e2e run "f9159da4-429b-4f56-aa03-5ee08cb43f79" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589209945 - Will randomize all specs
Will run 278 of 4842 specs

May 11 15:12:26.664: INFO: >>> kubeConfig: /root/.kube/config
May 11 15:12:26.669: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 11 15:12:26.691: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 11 15:12:26.726: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 11 15:12:26.726: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 11 15:12:26.726: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 11 15:12:26.738: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 11 15:12:26.738: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 11 15:12:26.738: INFO: e2e test version: v1.17.4
May 11 15:12:26.739: INFO: kube-apiserver version: v1.17.2
May 11 15:12:26.739: INFO: >>> kubeConfig: /root/.kube/config
May 11 15:12:26.743: INFO: Cluster IP family: ipv4
S
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:12:26.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
May 11 15:12:26.849: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
May 11 15:12:26.857: INFO: Waiting up to 5m0s for pod "var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf" in namespace "var-expansion-3423" to be "success or failure"
May 11 15:12:26.860: INFO: Pod "var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.526585ms
May 11 15:12:28.879: INFO: Pod "var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022415977s
May 11 15:12:30.883: INFO: Pod "var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf": Phase="Running", Reason="", readiness=true. Elapsed: 4.026083216s
May 11 15:12:32.887: INFO: Pod "var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030392534s
STEP: Saw pod success
May 11 15:12:32.887: INFO: Pod "var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf" satisfied condition "success or failure"
May 11 15:12:32.890: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf container dapi-container:
STEP: delete the pod
May 11 15:12:32.950: INFO: Waiting for pod var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf to disappear
May 11 15:12:32.968: INFO: Pod var-expansion-93393091-04e9-4129-8e9e-f2ce921892bf no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:12:32.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3423" for this suite.
• [SLOW TEST:6.233 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":1,"failed":0}
SSS
------------------------------
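Annotation: the spec above creates a pod whose command references an environment variable via $(VAR) syntax, which the kubelet expands before starting the container. A minimal manifest exercising the same substitution, as a sketch only; the pod name, variable name, value, and busybox image are illustrative assumptions, not values recovered from this run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo          # hypothetical; the framework generates var-expansion-<uid>
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox                  # assumed image for illustration
      # the kubelet expands $(MESSAGE) from the env list below before exec
      command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
      env:
      - name: MESSAGE                 # illustrative variable
        value: "test substitution"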
[sig-cli] Kubectl client Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:12:32.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 11 15:12:33.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2676'
May 11 15:12:35.816: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 11 15:12:35.816: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686
May 11 15:12:35.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-2676'
May 11 15:12:36.097: INFO: stderr: ""
May 11 15:12:36.097: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:12:36.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2676" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":2,"skipped":4,"failed":0}
SSSSSSSSSS
------------------------------
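Annotation: the stderr above flags --generator=job/v1 as deprecated and points at kubectl create instead. A Job manifest roughly equivalent to what the deprecated generator produced, as a sketch; the job name and image are taken from this run, the container name is an assumption:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: e2e-test-httpd-job
  spec:
    template:
      spec:
        restartPolicy: OnFailure      # matches --restart=OnFailure
        containers:
        - name: e2e-test-httpd-job    # assumed container name
          image: docker.io/library/httpd:2.4.38-alpine

Applying it with kubectl create -f - (or kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine) avoids the deprecated generator.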
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:36.782: INFO: stderr: "" May 11 15:12:36.782: INFO: stdout: "" May 11 15:12:36.782: INFO: update-demo-nautilus-chjqb is created but not running May 11 15:12:41.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5096' May 11 15:12:41.877: INFO: stderr: "" May 11 15:12:41.877: INFO: stdout: "update-demo-nautilus-chjqb update-demo-nautilus-djvp4 " May 11 15:12:41.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chjqb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:41.977: INFO: stderr: "" May 11 15:12:41.977: INFO: stdout: "true" May 11 15:12:41.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chjqb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:42.071: INFO: stderr: "" May 11 15:12:42.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 15:12:42.071: INFO: validating pod update-demo-nautilus-chjqb May 11 15:12:42.075: INFO: got data: { "image": "nautilus.jpg" } May 11 15:12:42.075: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 15:12:42.075: INFO: update-demo-nautilus-chjqb is verified up and running May 11 15:12:42.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djvp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:42.165: INFO: stderr: "" May 11 15:12:42.166: INFO: stdout: "" May 11 15:12:42.166: INFO: update-demo-nautilus-djvp4 is created but not running May 11 15:12:47.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5096' May 11 15:12:47.428: INFO: stderr: "" May 11 15:12:47.428: INFO: stdout: "update-demo-nautilus-chjqb update-demo-nautilus-djvp4 " May 11 15:12:47.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chjqb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:47.644: INFO: stderr: "" May 11 15:12:47.644: INFO: stdout: "true" May 11 15:12:47.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chjqb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:47.770: INFO: stderr: "" May 11 15:12:47.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 15:12:47.771: INFO: validating pod update-demo-nautilus-chjqb May 11 15:12:47.774: INFO: got data: { "image": "nautilus.jpg" } May 11 15:12:47.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 15:12:47.774: INFO: update-demo-nautilus-chjqb is verified up and running May 11 15:12:47.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djvp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:47.863: INFO: stderr: "" May 11 15:12:47.863: INFO: stdout: "true" May 11 15:12:47.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djvp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:12:47.961: INFO: stderr: "" May 11 15:12:47.961: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 15:12:47.961: INFO: validating pod update-demo-nautilus-djvp4 May 11 15:12:47.965: INFO: got data: { "image": "nautilus.jpg" } May 11 15:12:47.965: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 15:12:47.965: INFO: update-demo-nautilus-djvp4 is verified up and running STEP: rolling-update to new replication controller May 11 15:12:47.967: INFO: scanned /root for discovery docs: May 11 15:12:47.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5096' May 11 15:13:11.674: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 15:13:11.674: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 15:13:11.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5096' May 11 15:13:11.897: INFO: stderr: "" May 11 15:13:11.897: INFO: stdout: "update-demo-kitten-67b2n update-demo-kitten-smmhh " May 11 15:13:11.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-67b2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:13:12.148: INFO: stderr: "" May 11 15:13:12.148: INFO: stdout: "true" May 11 15:13:12.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-67b2n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:13:12.370: INFO: stderr: "" May 11 15:13:12.370: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 15:13:12.370: INFO: validating pod update-demo-kitten-67b2n May 11 15:13:12.383: INFO: got data: { "image": "kitten.jpg" } May 11 15:13:12.383: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 11 15:13:12.383: INFO: update-demo-kitten-67b2n is verified up and running May 11 15:13:12.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-smmhh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:13:12.534: INFO: stderr: "" May 11 15:13:12.534: INFO: stdout: "true" May 11 15:13:12.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-smmhh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5096' May 11 15:13:12.996: INFO: stderr: "" May 11 15:13:12.996: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 15:13:12.996: INFO: validating pod update-demo-kitten-smmhh May 11 15:13:13.200: INFO: got data: { "image": "kitten.jpg" } May 11 15:13:13.200: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 11 15:13:13.200: INFO: update-demo-kitten-smmhh is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:13:13.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5096" for this suite. 
• [SLOW TEST:37.169 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":3,"skipped":14,"failed":0}
SSSS
------------------------------
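Annotation: kubectl rolling-update is flagged as deprecated above in favor of kubectl rollout, which drives Deployments rather than bare replication controllers. A Deployment sketch of the same update-demo workload, under the assumption that one would migrate off replication controllers; the labels mirror the name=update-demo selector seen in this run:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: update-demo               # hypothetical; the test itself uses two RCs
  spec:
    replicas: 2
    selector:
      matchLabels:
        name: update-demo
    template:
      metadata:
        labels:
          name: update-demo
      spec:
        containers:
        - name: update-demo
          # changing this image (nautilus:1.0 -> kitten:1.0) triggers a rolling update
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0

With that in place, kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0 followed by kubectl rollout status deployment/update-demo replaces the deprecated command.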
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:13:13.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 15:13:15.149: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 15:13:17.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 15:13:19.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 15:13:24.002: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:13:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7110" for this suite.
STEP: Destroying namespace "webhook-7110-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.979 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":4,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
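Annotation: the fail-closed behavior comes from the webhook's failurePolicy, which controls what the API server does when the webhook cannot be reached. A minimal ValidatingWebhookConfiguration of the kind this spec registers, as a sketch; the configuration name, webhook name, and path are illustrative assumptions, while the service name and namespace are taken from this run:

  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: fail-closed-example         # hypothetical name
  webhooks:
  - name: fail-closed.example.com     # hypothetical webhook name
    failurePolicy: Fail               # reject requests when the webhook is unreachable
    sideEffects: None
    admissionReviewVersions: ["v1"]
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
    clientConfig:
      service:
        name: e2e-test-webhook        # service name from this run
        namespace: webhook-7110       # namespace from this run
        path: /unreachable            # assumed path the server does not serve

Because failurePolicy is Fail, every matching configmap CREATE is rejected while the endpoint stays unreachable, which is exactly what the spec asserts.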
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:13:32.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:13:34.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806808, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:13:37.553: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:13:37.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2962-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:13:38.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-854" for this suite. STEP: Destroying namespace "webhook-854-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.675 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":5,"skipped":36,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:13:38.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 15:13:39.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d" in namespace "downward-api-4051" to be "success or failure" May 11 15:13:39.634: INFO: Pod "downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d": Phase="Pending", Reason="", readiness=false. Elapsed: 335.106152ms May 11 15:13:41.773: INFO: Pod "downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473882724s May 11 15:13:43.843: INFO: Pod "downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.544012706s STEP: Saw pod success May 11 15:13:43.843: INFO: Pod "downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d" satisfied condition "success or failure" May 11 15:13:43.847: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d container client-container: STEP: delete the pod May 11 15:13:43.948: INFO: Waiting for pod downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d to disappear May 11 15:13:44.019: INFO: Pod downwardapi-volume-ec8bd156-f00e-43a5-8be1-49d18506df4d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:13:44.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4051" for this suite. 
• [SLOW TEST:5.100 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":46,"failed":0}
SSSSSSSS
------------------------------
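Annotation: when a container sets no CPU limit, the downward API reports the node's allocatable CPU instead, which is what the spec above checks. A pod sketch exposing limits.cpu through a downwardAPI volume; the pod name, image, and mount path are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo     # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                  # assumed image
      # no resources.limits.cpu is set, so the file reports node allocatable CPU
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu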
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:13:44.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 15:13:46.474: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 15:13:48.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806826, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806827, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 15:13:50.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806826, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806827, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806826, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 15:13:54.042: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:13:54.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8426" for this suite.
STEP: Destroying namespace "webhook-8426-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.964 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":7,"skipped":54,"failed":0}
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:13:55.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
May 11 15:13:56.960: INFO: Waiting up to 5m0s for pod "pod-8db5eecd-4d54-438e-9890-68044ea53938" in namespace "emptydir-6111" to be "success or failure"
May 11 15:13:57.036: INFO: Pod "pod-8db5eecd-4d54-438e-9890-68044ea53938": Phase="Pending", Reason="", readiness=false. Elapsed: 75.560543ms
May 11 15:13:59.040: INFO: Pod "pod-8db5eecd-4d54-438e-9890-68044ea53938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079866408s
May 11 15:14:01.140: INFO: Pod "pod-8db5eecd-4d54-438e-9890-68044ea53938": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180069359s
May 11 15:14:03.143: INFO: Pod "pod-8db5eecd-4d54-438e-9890-68044ea53938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183367982s
STEP: Saw pod success
May 11 15:14:03.143: INFO: Pod "pod-8db5eecd-4d54-438e-9890-68044ea53938" satisfied condition "success or failure"
May 11 15:14:03.145: INFO: Trying to get logs from node jerma-worker pod pod-8db5eecd-4d54-438e-9890-68044ea53938 container test-container:
STEP: delete the pod
May 11 15:14:03.579: INFO: Waiting for pod pod-8db5eecd-4d54-438e-9890-68044ea53938 to disappear
May 11 15:14:03.617: INFO: Pod pod-8db5eecd-4d54-438e-9890-68044ea53938 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:14:03.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6111" for this suite.
• [SLOW TEST:7.632 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":54,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
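Annotation: the (non-root,0644,default) variant writes a 0644-mode file into an emptyDir on the default medium while running as a non-root user. A pod sketch of that shape; the pod name, UID, image, command, and mount path are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo               # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                 # assumed non-root UID
    containers:
    - name: test-container
      image: busybox                  # assumed image
      # create a file with 0644 permissions and read its mode back
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                    # default medium (node disk)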
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:14:03.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 15:14:19.739: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:14:19.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6902" for this suite.
• [SLOW TEST:16.552 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":81,"failed":0}
SSSSSSSSS
------------------------------
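Annotation: FallbackToLogsOnError makes the kubelet fill the termination message from the tail of the container log when the container fails without writing to its termination message file, which matches the Failed case above (termination message "DONE"). A container sketch; the pod name, image, and command are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: term-demo
      image: busybox                  # assumed image
      # exits non-zero without touching /dev/termination-log,
      # so the last log line ("DONE") becomes the termination message
      command: ["sh", "-c", "echo DONE; exit 1"]
      terminationMessagePolicy: FallbackToLogsOnError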
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:14:20.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 11 15:14:21.161: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 15:14:21.523: INFO: Waiting for terminating namespaces to be deleted...
May 11 15:14:21.590: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
May 11 15:14:21.595: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 15:14:21.595: INFO: Container kindnet-cni ready: true, restart count 0
May 11 15:14:21.595: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 15:14:21.595: INFO: Container kube-proxy ready: true, restart count 0
May 11 15:14:21.595: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
May 11 15:14:21.612: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 15:14:21.612: INFO: Container kindnet-cni ready: true, restart count 0
May 11 15:14:21.612: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 11 15:14:21.612: INFO: Container kube-bench ready: false, restart count 0
May 11 15:14:21.612: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 15:14:21.612: INFO: Container kube-proxy ready: true, restart count 0
May 11 15:14:21.612: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 11 15:14:21.613: INFO: Container kube-hunter ready: false, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8bd874bb-2224-4c22-89e1-4b8309095105 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-8bd874bb-2224-4c22-89e1-4b8309095105 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8bd874bb-2224-4c22-89e1-4b8309095105
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:15:12.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2759" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:52.325 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":10,"skipped":90,"failed":0}
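Annotation: the scheduler only treats host ports as conflicting when hostPort, hostIP, and protocol all collide, so pod1, pod2, and pod3 above can share port 54321 on one node. A sketch of the first pod's spec; the port, hostIP, and node label are taken from this run, while the pod name, container name, and image are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1                        # illustrative name
  spec:
    nodeSelector:
      kubernetes.io/e2e-8bd874bb-2224-4c22-89e1-4b8309095105: "90"
    containers:
    - name: server                    # assumed container name
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
      ports:
      - containerPort: 54321
        hostPort: 54321
        hostIP: 127.0.0.1             # pod2 differs only here (127.0.0.2)
        protocol: TCP                 # pod3 differs only here (UDP)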
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:15:12.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 15:15:23.685: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2881870b-5035-49dc-9fd8-679a1eaee2c2"
May 11 15:15:23.685: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2881870b-5035-49dc-9fd8-679a1eaee2c2" in namespace "pods-7208" to be "terminated due to deadline exceeded"
May 11 15:15:24.058: INFO: Pod "pod-update-activedeadlineseconds-2881870b-5035-49dc-9fd8-679a1eaee2c2": Phase="Running", Reason="", readiness=true. Elapsed: 372.478418ms
May 11 15:15:26.230: INFO: Pod "pod-update-activedeadlineseconds-2881870b-5035-49dc-9fd8-679a1eaee2c2": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.545225141s
May 11 15:15:26.230: INFO: Pod "pod-update-activedeadlineseconds-2881870b-5035-49dc-9fd8-679a1eaee2c2" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:15:26.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7208" for this suite.
• [SLOW TEST:13.737 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":90,"failed":0}
SSSS
------------------------------
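Annotation: activeDeadlineSeconds can be added or shortened on a running pod (not raised), and once the deadline passes the kubelet kills the pod with phase Failed and reason DeadlineExceeded, which is what the spec above observes. A pod sketch; the name, image, command, and the initial deadline value are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: active-deadline-demo        # hypothetical name
  spec:
    activeDeadlineSeconds: 30         # assumed initial value; the test later lowers it
    restartPolicy: Never
    containers:
    - name: main
      image: busybox                  # assumed image
      command: ["sh", "-c", "sleep 3600"]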
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:15:26.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 15:15:39.080: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:15:40.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6869" for this suite.
• [SLOW TEST:14.339 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":94,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:15:40.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 11 15:15:41.389: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
May 11 15:15:43.403: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 11 15:15:46.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806942, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:15:48.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806942, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:15:50.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806942, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:15:52.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806943, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724806942, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:15:56.131: INFO: Waited 1.708947847s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:16:03.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-657" for this suite. • [SLOW TEST:22.692 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":13,"skipped":99,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:16:03.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:16:05.257: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 11 15:16:05.316: INFO: Number of nodes with available pods: 0 May 11 15:16:05.316: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 11 15:16:05.791: INFO: Number of nodes with available pods: 0 May 11 15:16:05.791: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:06.887: INFO: Number of nodes with available pods: 0 May 11 15:16:06.887: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:07.795: INFO: Number of nodes with available pods: 0 May 11 15:16:07.795: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:09.256: INFO: Number of nodes with available pods: 0 May 11 15:16:09.256: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:09.907: INFO: Number of nodes with available pods: 0 May 11 15:16:09.907: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:11.056: INFO: Number of nodes with available pods: 0 May 11 15:16:11.056: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:11.794: INFO: Number of nodes with available pods: 0 May 11 15:16:11.794: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:12.795: INFO: Number of nodes with available pods: 1 May 11 15:16:12.795: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 11 15:16:13.033: INFO: Number of nodes with available pods: 1 May 11 15:16:13.033: INFO: Number of running nodes: 0, number of available pods: 1 May 11 15:16:14.038: INFO: Number of nodes with available pods: 0 May 11 15:16:14.038: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 11 15:16:14.074: INFO: Number of nodes with available pods: 0 May 11 15:16:14.074: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:15.224: INFO: Number of nodes with available pods: 0 May 11 15:16:15.224: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:16.078: INFO: Number of nodes with available pods: 0 May 11 15:16:16.078: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:17.116: INFO: Number of nodes with available pods: 0 May 11 15:16:17.116: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:18.099: INFO: Number of nodes with available pods: 0 May 11 15:16:18.099: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:19.183: INFO: Number of nodes with available pods: 0 May 11 15:16:19.183: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:20.079: INFO: Number of nodes with available pods: 0 May 11 15:16:20.079: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:21.078: INFO: Number of nodes with available pods: 0 May 11 15:16:21.079: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:16:22.332: INFO: Number of nodes with available pods: 1 May 11 15:16:22.332: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5689, will wait for the garbage collector to delete the pods May 11 15:16:22.568: INFO: Deleting DaemonSet.extensions daemon-set took: 6.028547ms May 11 15:16:22.668: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.239618ms May 11 15:16:29.572: INFO: Number of nodes with available pods: 0 May 11 
15:16:29.572: INFO: Number of running nodes: 0, number of available pods: 0 May 11 15:16:29.595: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5689/daemonsets","resourceVersion":"15260335"},"items":null} May 11 15:16:29.622: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5689/pods","resourceVersion":"15260336"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:16:29.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5689" for this suite. • [SLOW TEST:26.386 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":14,"skipped":103,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:16:29.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 15:16:29.792: INFO: Waiting up to 5m0s for pod "downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b" in namespace "downward-api-8256" to be "success or failure" May 11 15:16:29.800: INFO: Pod "downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454805ms May 11 15:16:31.870: INFO: Pod "downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078792334s May 11 15:16:33.901: INFO: Pod "downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109166112s STEP: Saw pod success May 11 15:16:33.901: INFO: Pod "downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b" satisfied condition "success or failure" May 11 15:16:33.903: INFO: Trying to get logs from node jerma-worker2 pod downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b container dapi-container: STEP: delete the pod May 11 15:16:33.970: INFO: Waiting for pod downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b to disappear May 11 15:16:33.988: INFO: Pod downward-api-da3619b7-fae3-4bb7-a2dc-8455f3dd329b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:16:33.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8256" for this suite. 
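What the dapi-container in the test above exercises is the downward API's env-var path: status.hostIP is resolved at pod start and injected into the environment, and the test then reads the pod log to confirm it. A minimal sketch of the same pattern (pod name, image and variable name are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the field the test asserts on
    EOF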
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:16:33.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 15:16:42.153: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 15:16:42.175: INFO: Pod pod-with-poststart-exec-hook still exists May 11 15:16:44.175: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 15:16:44.180: INFO: Pod pod-with-poststart-exec-hook still exists May 11 15:16:46.175: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 15:16:46.181: INFO: Pod pod-with-poststart-exec-hook still exists May 11 15:16:48.175: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 15:16:48.179: INFO: Pod pod-with-poststart-exec-hook still exists May 11 15:16:50.175: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 15:16:50.179: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:16:50.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2267" for this suite. 
• [SLOW TEST:16.191 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:16:50.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:16:50.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 11 15:16:50.468: INFO: stderr: "" May 11 15:16:50.468: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:16:50.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4957" for this suite. 
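The version check above asserts only that both the client and the server blocks appear in the output. For scripting the same check, a machine-readable variant (assuming jq is available on the path):

    kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'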
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":17,"skipped":153,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:16:50.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:16:50.533: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 11 15:16:52.616: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:16:53.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5941" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":18,"skipped":155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:16:53.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 11 15:16:54.595: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix446983110/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:16:54.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-872" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":19,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:16:55.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8123.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8123.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8123.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 15:17:03.576: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.579: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.582: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.584: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.593: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.596: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.599: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.602: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:03.607: INFO: Lookups using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local] May 11 15:17:08.611: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource 
(get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.614: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.616: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.619: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.627: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.630: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.632: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.635: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:08.640: INFO: Lookups using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local] May 11 15:17:13.611: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.614: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.616: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.618: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local from 
pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.625: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.627: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.629: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.631: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:13.634: INFO: Lookups using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local] May 11 15:17:18.623: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.747: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.751: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.753: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.764: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.766: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods 
dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.770: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.772: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:18.974: INFO: Lookups using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local] May 11 15:17:23.660: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.663: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.666: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.668: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.675: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.676: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.679: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.681: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:23.686: INFO: Lookups using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local] May 11 15:17:28.619: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.622: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.624: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.626: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.633: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.635: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.637: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.639: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:28.643: INFO: Lookups using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8123.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8123.svc.cluster.local jessie_udp@dns-test-service-2.dns-8123.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local] May 11 15:17:33.659: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local from pod dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78: the server could not find the 
requested resource (get pods dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78) May 11 15:17:33.664: INFO: Lookups using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 failed for: [jessie_tcp@dns-test-service-2.dns-8123.svc.cluster.local] May 11 15:17:38.650: INFO: DNS probes using dns-8123/dns-test-ea2b9b8d-10f4-48ed-9c94-a760628b9a78 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:17:38.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8123" for this suite. • [SLOW TEST:43.692 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":20,"skipped":228,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:17:38.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 15:17:47.697: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:17:48.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-777" for this suite. 
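A sketch of the pod shape behind the non-root, non-default-path variant just finished (names and image are illustrative): the kubelet copies whatever the container wrote at terminationMessagePath into the terminated container state, which is what the Expected: &{DONE} assertion reads back.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo     # illustrative name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                  # non-root, per the test name
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
        terminationMessagePath: /dev/termination-custom-log
        terminationMessagePolicy: File
    EOF
    # after the container exits:
    kubectl get pod termination-message-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'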
• [SLOW TEST:9.375 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":250,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:17:48.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:17:52.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8387" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":258,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:17:52.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:18:00.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8903" for this suite. • [SLOW TEST:7.124 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":23,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:18:00.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 11 15:18:00.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4556' May 11 15:18:00.939: INFO: stderr: "" May 11 15:18:00.939: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 15:18:01.943: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:18:01.943: INFO: Found 0 / 1 May 11 15:18:02.984: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:18:02.984: INFO: Found 0 / 1 May 11 15:18:03.983: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:18:03.983: INFO: Found 0 / 1 May 11 15:18:05.220: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:18:05.220: INFO: Found 1 / 1 May 11 15:18:05.220: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 11 15:18:05.224: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:18:05.224: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 15:18:05.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-wh956 --namespace=kubectl-4556 -p {"metadata":{"annotations":{"x":"y"}}}' May 11 15:18:05.713: INFO: stderr: "" May 11 15:18:05.713: INFO: stdout: "pod/agnhost-master-wh956 patched\n" STEP: checking annotations May 11 15:18:05.819: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:18:05.819: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
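The patch logged above, restated with a verification step; the pod name and namespace are the ones from this run, and the annotation pair x=y is what the test applies:

    kubectl patch pod agnhost-master-wh956 -n kubectl-4556 \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl get pod agnhost-master-wh956 -n kubectl-4556 \
      -o jsonpath='{.metadata.annotations.x}'    # prints: y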
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:18:05.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4556" for this suite. • [SLOW TEST:5.775 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":24,"skipped":313,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:18:05.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5558.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 81.119.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.119.81_udp@PTR;check="$$(dig +tcp +noall +answer +search 81.119.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.119.81_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5558.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5558.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 81.119.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.119.81_udp@PTR;check="$$(dig +tcp +noall +answer +search 81.119.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.119.81_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 15:18:15.368: INFO: Unable to read wheezy_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.372: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.375: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.378: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.400: INFO: Unable to read jessie_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.404: INFO: Unable to read jessie_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.407: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.410: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:15.430: INFO: Lookups using dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c failed for: [wheezy_udp@dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_udp@dns-test-service.dns-5558.svc.cluster.local jessie_tcp@dns-test-service.dns-5558.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local] May 11 15:18:20.491: INFO: Unable to read wheezy_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.496: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods 
dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.516: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.559: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.576: INFO: Unable to read jessie_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.581: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.584: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:20.620: INFO: Lookups using dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c failed for: [wheezy_udp@dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_udp@dns-test-service.dns-5558.svc.cluster.local jessie_tcp@dns-test-service.dns-5558.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local] May 11 15:18:25.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.442: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.444: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.470: INFO: Unable to read jessie_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the 
server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.472: INFO: Unable to read jessie_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.474: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.476: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:25.490: INFO: Lookups using dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c failed for: [wheezy_udp@dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_udp@dns-test-service.dns-5558.svc.cluster.local jessie_tcp@dns-test-service.dns-5558.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local] May 11 15:18:30.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.444: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.466: INFO: Unable to read jessie_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.562: INFO: Unable to read jessie_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.567: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.601: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod 
dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:30.620: INFO: Lookups using dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c failed for: [wheezy_udp@dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_udp@dns-test-service.dns-5558.svc.cluster.local jessie_tcp@dns-test-service.dns-5558.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local] May 11 15:18:35.676: INFO: Unable to read wheezy_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:35.976: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:36.002: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:36.005: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:36.022: INFO: Unable to read jessie_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:36.024: INFO: Unable to read jessie_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:36.026: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:36.028: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:36.041: INFO: Lookups using dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c failed for: [wheezy_udp@dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_udp@dns-test-service.dns-5558.svc.cluster.local jessie_tcp@dns-test-service.dns-5558.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local] May 11 
15:18:40.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.445: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.448: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.481: INFO: Unable to read jessie_udp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.487: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.544: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local from pod dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c: the server could not find the requested resource (get pods dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c) May 11 15:18:40.568: INFO: Lookups using dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c failed for: [wheezy_udp@dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@dns-test-service.dns-5558.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_udp@dns-test-service.dns-5558.svc.cluster.local jessie_tcp@dns-test-service.dns-5558.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5558.svc.cluster.local] May 11 15:18:45.636: INFO: DNS probes using dns-5558/dns-test-d3e32b0c-a1e0-4f00-a374-9921dc44b52c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:18:46.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5558" for this suite. 
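The probe loops above exercise A, SRV, and PTR records for the service over both UDP and TCP, writing an OK marker into /results for each name; the "Unable to read" lines are the prober polling those result files until every lookup has succeeded. A minimal manual sketch of the same lookups, reusing the service/namespace names from this log (the dnsutils image name and tag are an assumption):

# throwaway pod with dig available (image name/tag is an assumption)
kubectl run dnsutils --restart=Never --image=gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0 -- sleep 3600
# service A record, first over UDP, then forcing TCP
kubectl exec dnsutils -- dig +notcp +noall +answer +search dns-test-service.dns-5558.svc.cluster.local A
kubectl exec dnsutils -- dig +tcp +noall +answer +search dns-test-service.dns-5558.svc.cluster.local A
# SRV record published for the service's named port "http"
kubectl exec dnsutils -- dig +noall +answer _http._tcp.dns-test-service.dns-5558.svc.cluster.local SRV
# reverse (PTR) lookup of the ClusterIP 10.97.119.81, i.e. the 81.119.97.10.in-addr.arpa. query from the loop
kubectl exec dnsutils -- dig +noall +answer -x 10.97.119.81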
• [SLOW TEST:40.741 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":25,"skipped":320,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:18:46.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 15:18:49.700: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 15:18:51.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807129, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807129, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807128, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:18:53.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807129, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807129, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807128, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 
15:18:56.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807129, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807129, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807128, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:18:58.854: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:18:58.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:19:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8500" for this suite. STEP: Destroying namespace "webhook-8500-markers" for this suite. 
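For the deny case, the test registers a validating webhook against the custom resource's group and plural so that CREATE, UPDATE, and DELETE all pass through it. A minimal sketch of such a registration, using the service coordinates from this log; the CR group/plural, handler path, and caBundle are placeholders, not the test's actual values:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-example         # hypothetical name
webhooks:
- name: deny-custom-resource.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                        # reject the request if the webhook call itself fails
  rules:
  - apiGroups: ["stable.example.com"]        # assumed CR group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["e2e-test-crds"]             # assumed CR plural
  clientConfig:
    service:
      name: e2e-test-webhook                 # service name from the log
      namespace: webhook-8500                # namespace from the log
      path: /custom-resource                 # assumed handler path
    caBundle: <base64-encoded CA>            # placeholder; must match the serving cert set up above
EOF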
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.593 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":26,"skipped":323,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:19:02.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 15:19:03.414: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 15:19:05.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:19:07.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:19:10.460: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:19:10.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7254" for this suite. STEP: Destroying namespace "webhook-7254-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.925 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":27,"skipped":328,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:19:11.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 15:19:24.034: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:24.045: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:26.046: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:26.049: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:28.046: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:28.049: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:30.046: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:30.054: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:32.048: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:32.052: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:34.046: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:34.050: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:36.046: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:36.632: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:38.046: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:38.077: INFO: Pod pod-with-prestop-http-hook still exists May 11 15:19:40.046: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 15:19:40.050: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:19:40.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8652" for this suite. 
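The hook under test is an HTTP preStop: before termination begins, the kubelet performs an HTTP GET against the configured endpoint, and the test verifies the handler pod received it. A minimal pod sketch, with the image, path, and port as assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name mirrors the log
spec:
  containers:
  - name: app
    image: nginx                     # image is an assumption
    lifecycle:
      preStop:
        httpGet:
          path: /shutdown            # assumed handler path
          port: 8080                 # assumed handler port
EOF
# deleting the pod fires the preStop GET before the container is terminated
kubectl delete pod pod-with-prestop-http-hook

The ~16 seconds of "still exists" polling above is expected: the hook call and the termination grace period both run before the pod object is finally removed.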
• [SLOW TEST:28.986 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":338,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:19:40.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 11 15:19:40.915: INFO: Pod name wrapped-volume-race-ef7e0296-d5aa-47b1-820b-38b30fe0c874: Found 0 pods out of 5 May 11 15:19:45.954: INFO: Pod name wrapped-volume-race-ef7e0296-d5aa-47b1-820b-38b30fe0c874: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ef7e0296-d5aa-47b1-820b-38b30fe0c874 in namespace emptydir-wrapper-7290, will wait for the garbage collector to delete the pods May 11 15:20:03.728: INFO: Deleting ReplicationController wrapped-volume-race-ef7e0296-d5aa-47b1-820b-38b30fe0c874 took: 6.534174ms May 11 15:20:04.128: INFO: Terminating ReplicationController wrapped-volume-race-ef7e0296-d5aa-47b1-820b-38b30fe0c874 pods took: 400.454787ms STEP: Creating RC which spawns configmap-volume pods May 11 15:20:20.851: INFO: Pod name wrapped-volume-race-72b1afe6-e39b-434f-9713-aad984d31571: Found 0 pods out of 5 May 11 15:20:25.871: INFO: Pod name wrapped-volume-race-72b1afe6-e39b-434f-9713-aad984d31571: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-72b1afe6-e39b-434f-9713-aad984d31571 in namespace emptydir-wrapper-7290, will wait for the garbage collector to delete the pods May 11 15:20:43.984: INFO: Deleting ReplicationController wrapped-volume-race-72b1afe6-e39b-434f-9713-aad984d31571 took: 5.802233ms May 11 15:20:44.384: INFO: Terminating ReplicationController wrapped-volume-race-72b1afe6-e39b-434f-9713-aad984d31571 pods took: 400.229041ms STEP: Creating RC which spawns configmap-volume pods May 11 15:21:00.668: INFO: Pod name wrapped-volume-race-dc642233-a412-4cec-a758-c2f290cdf1fd: Found 0 pods out of 5 May 11 15:21:05.788: INFO: Pod name wrapped-volume-race-dc642233-a412-4cec-a758-c2f290cdf1fd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dc642233-a412-4cec-a758-c2f290cdf1fd in 
namespace emptydir-wrapper-7290, will wait for the garbage collector to delete the pods May 11 15:21:23.726: INFO: Deleting ReplicationController wrapped-volume-race-dc642233-a412-4cec-a758-c2f290cdf1fd took: 1.216123625s May 11 15:21:24.526: INFO: Terminating ReplicationController wrapped-volume-race-dc642233-a412-4cec-a758-c2f290cdf1fd pods took: 800.235077ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:21:48.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7290" for this suite. • [SLOW TEST:128.110 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":29,"skipped":348,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:21:48.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-b5850c04-af4f-4718-9925-4f14211c4c9e STEP: Creating configMap with name cm-test-opt-upd-c9db8269-7223-420f-878a-6d12dd7ef1ae STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b5850c04-af4f-4718-9925-4f14211c4c9e STEP: Updating configmap cm-test-opt-upd-c9db8269-7223-420f-878a-6d12dd7ef1ae STEP: Creating configMap with name cm-test-opt-create-db96864a-2aea-4781-9b16-43c1da820b52 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:23:25.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1342" for this suite. 
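The optional-ConfigMap case exercises three transitions in one pod: a mounted ConfigMap is deleted, a second is updated, and a third, referenced as optional before it exists, is created. A minimal sketch of an optional ConfigMap volume (pod and image are hypothetical; the ConfigMap name echoes the log):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: app
    image: busybox                   # image is an assumption
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-test-opt-create       # may not exist yet
      optional: true                 # the pod still starts; files appear once the ConfigMap is created
EOF

Because the kubelet resyncs volume contents periodically, the "waiting to observe update in volume" step can legitimately take a minute or more, which is why this test runs long (about 97s here).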
• [SLOW TEST:97.716 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":364,"failed":0} SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:23:25.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1342, will wait for the garbage collector to delete the pods May 11 15:23:38.245: INFO: Deleting Job.batch foo took: 6.740066ms May 11 15:23:38.945: INFO: Terminating Job.batch foo pods took: 700.258053ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:24:19.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1342" for this suite. 
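Deleting the Job hands its pods to the garbage collector, which is the roughly 40-second wait visible between "Ensuring job was deleted" and the namespace teardown. A minimal reproduction (job name and command are hypothetical; cascade flag syntax varies across kubectl versions):

# create a job, then delete it and let the garbage collector remove its pods
kubectl create job foo --image=busybox -- sleep 300
kubectl delete job foo
# the job controller labels its pods with job-name=<name>; watch them drain
kubectl get pods -l job-name=foo --watch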
• [SLOW TEST:54.129 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":31,"skipped":366,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:24:20.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-595.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-595.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-595.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 15:24:31.521: INFO: DNS probes using dns-test-f2698c02-a51b-4af2-aba0-1db5d8067d3a succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-595.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-595.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-595.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 15:24:42.257: INFO: File wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 15:24:42.260: INFO: File jessie_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 15:24:42.260: INFO: Lookups using dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 failed for: [wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local jessie_udp@dns-test-service-3.dns-595.svc.cluster.local] May 11 15:24:47.266: INFO: File wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 11 15:24:47.270: INFO: File jessie_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 15:24:47.270: INFO: Lookups using dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 failed for: [wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local jessie_udp@dns-test-service-3.dns-595.svc.cluster.local] May 11 15:24:52.266: INFO: File wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains '' instead of 'bar.example.com.' May 11 15:24:52.270: INFO: File jessie_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 15:24:52.270: INFO: Lookups using dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 failed for: [wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local jessie_udp@dns-test-service-3.dns-595.svc.cluster.local] May 11 15:24:57.747: INFO: File wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 15:24:57.788: INFO: File jessie_udp@dns-test-service-3.dns-595.svc.cluster.local from pod dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 15:24:57.788: INFO: Lookups using dns-595/dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 failed for: [wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local jessie_udp@dns-test-service-3.dns-595.svc.cluster.local] May 11 15:25:02.267: INFO: DNS probes using dns-test-9752fc88-64a4-442c-8d1a-3eb503ba3e96 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-595.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-595.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-595.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-595.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 15:25:10.824: INFO: DNS probes using dns-test-7c722ee9-f306-4f07-a405-52657d0869e9 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:25:10.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-595" for this suite. 
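This test drives one service through three DNS shapes: a CNAME to foo.example.com, a re-pointed CNAME to bar.example.com, and finally an A record once the service becomes type ClusterIP; the intermediate "contains 'foo.example.com.'" lines are the prober retrying until the updated CNAME propagates through cluster DNS. A sketch of the same sequence with kubectl (service and namespace names from the log; the exact patch shapes are assumptions):

# ExternalName services publish a CNAME rather than a ClusterIP
kubectl create service externalname dns-test-service-3 --external-name foo.example.com -n dns-595
# re-point the CNAME; resolvers inside the cluster now see bar.example.com.
kubectl patch service dns-test-service-3 -n dns-595 -p '{"spec":{"externalName":"bar.example.com"}}'
# convert to ClusterIP so the name resolves to an A record instead of a CNAME
kubectl patch service dns-test-service-3 -n dns-595 -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80}]}}'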
• [SLOW TEST:50.894 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":32,"skipped":380,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:25:10.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:25:10.972: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:25:12.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7681" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":33,"skipped":385,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:25:12.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-8ncm STEP: Creating a pod to test atomic-volume-subpath May 11 15:25:12.864: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8ncm" in namespace "subpath-3734" to be "success or failure" May 11 15:25:12.868: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44122ms May 11 15:25:14.963: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.098750929s May 11 15:25:16.967: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 4.103183816s May 11 15:25:18.970: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 6.106357754s May 11 15:25:20.974: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 8.110414322s May 11 15:25:22.979: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 10.115092962s May 11 15:25:24.983: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 12.118721294s May 11 15:25:26.986: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 14.122409906s May 11 15:25:28.990: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 16.125841989s May 11 15:25:31.112: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 18.248318276s May 11 15:25:33.115: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 20.251014057s May 11 15:25:35.119: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 22.254516044s May 11 15:25:37.122: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Running", Reason="", readiness=true. Elapsed: 24.258397362s May 11 15:25:39.127: INFO: Pod "pod-subpath-test-configmap-8ncm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.262937919s STEP: Saw pod success May 11 15:25:39.127: INFO: Pod "pod-subpath-test-configmap-8ncm" satisfied condition "success or failure" May 11 15:25:39.130: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-8ncm container test-container-subpath-configmap-8ncm: STEP: delete the pod May 11 15:25:39.215: INFO: Waiting for pod pod-subpath-test-configmap-8ncm to disappear May 11 15:25:39.218: INFO: Pod pod-subpath-test-configmap-8ncm no longer exists STEP: Deleting pod pod-subpath-test-configmap-8ncm May 11 15:25:39.218: INFO: Deleting pod "pod-subpath-test-configmap-8ncm" in namespace "subpath-3734" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:25:39.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3734" for this suite. 
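Here the subPath mount shadows a file that already exists in the container image with a single key projected from a ConfigMap, and the container must keep reading the projected content for the whole ~26s run. A minimal sketch (ConfigMap/pod names, image, and paths are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: app
    image: busybox                    # image is an assumption
    command: ["sh", "-c", "cat /etc/resolv.conf && sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/resolv.conf     # mount lands on an existing file, shadowing it
      subPath: resolv.conf            # single key taken from the ConfigMap volume
  volumes:
  - name: cfg
    configMap:
      name: my-config                 # assumed to contain a "resolv.conf" key
EOF

One caveat worth knowing: unlike whole-volume ConfigMap mounts, subPath mounts do not receive later ConfigMap updates.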
• [SLOW TEST:26.553 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":34,"skipped":399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:25:39.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-816.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-816.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 15:25:47.394: INFO: DNS probes using dns-816/dns-test-330ed0a5-45d1-47b5-823f-25f7fc316510 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:25:47.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-816" for this suite. 
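This test checks the two names every conformant cluster must serve from inside a pod: the kubernetes.default service and the pod's own dashed-IP A record under <namespace>.pod.cluster.local, exactly the podARec the loops above compute with hostname -i and awk. The same checks by hand, reusing a hypothetical dnsutils pod with dig installed:

kubectl exec dnsutils -- dig +noall +answer +search kubernetes.default.svc.cluster.local A
# pod A records take the form <ip-with-dashes>.<namespace>.pod.cluster.local
POD_IP=$(kubectl get pod dnsutils -o jsonpath='{.status.podIP}')
kubectl exec dnsutils -- dig +noall +answer "${POD_IP//./-}.default.pod.cluster.local" A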
• [SLOW TEST:8.446 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":35,"skipped":441,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:25:47.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 15:25:48.330: INFO: Waiting up to 5m0s for pod "pod-ba461ba7-8c65-4159-8bdc-071124b32501" in namespace "emptydir-549" to be "success or failure" May 11 15:25:48.387: INFO: Pod "pod-ba461ba7-8c65-4159-8bdc-071124b32501": Phase="Pending", Reason="", readiness=false. Elapsed: 56.791959ms May 11 15:25:50.402: INFO: Pod "pod-ba461ba7-8c65-4159-8bdc-071124b32501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071611023s May 11 15:25:52.406: INFO: Pod "pod-ba461ba7-8c65-4159-8bdc-071124b32501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075806369s May 11 15:25:54.409: INFO: Pod "pod-ba461ba7-8c65-4159-8bdc-071124b32501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078804579s STEP: Saw pod success May 11 15:25:54.409: INFO: Pod "pod-ba461ba7-8c65-4159-8bdc-071124b32501" satisfied condition "success or failure" May 11 15:25:54.412: INFO: Trying to get logs from node jerma-worker2 pod pod-ba461ba7-8c65-4159-8bdc-071124b32501 container test-container: STEP: delete the pod May 11 15:25:54.462: INFO: Waiting for pod pod-ba461ba7-8c65-4159-8bdc-071124b32501 to disappear May 11 15:25:54.517: INFO: Pod pod-ba461ba7-8c65-4159-8bdc-071124b32501 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:25:54.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-549" for this suite. 
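The (non-root,0644,tmpfs) variant means: run as a non-root UID, write a file with mode 0644, on a memory-backed emptyDir. A minimal sketch of the same shape (pod name, UID, image, and command are assumptions standing in for the test image's file-writing logic):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  securityContext:
    runAsUser: 1001                   # non-root, matching the test variant
  containers:
  - name: test
    image: busybox                    # image is an assumption
    command: ["sh", "-c", "echo data > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: tmp
      mountPath: /mnt
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory                  # tmpfs-backed emptyDir
EOF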
• [SLOW TEST:6.849 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":455,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:25:54.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-ff2e5445-6c73-4f8c-8bce-769cac765f55 STEP: Creating a pod to test consume secrets May 11 15:25:54.599: INFO: Waiting up to 5m0s for pod "pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2" in namespace "secrets-1826" to be "success or failure" May 11 15:25:54.603: INFO: Pod "pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.451698ms May 11 15:25:56.627: INFO: Pod "pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028224055s May 11 15:25:58.631: INFO: Pod "pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031856023s STEP: Saw pod success May 11 15:25:58.631: INFO: Pod "pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2" satisfied condition "success or failure" May 11 15:25:58.634: INFO: Trying to get logs from node jerma-worker pod pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2 container secret-volume-test: STEP: delete the pod May 11 15:25:58.840: INFO: Waiting for pod pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2 to disappear May 11 15:25:58.879: INFO: Pod pod-secrets-238ceff6-ffc0-432b-a099-0e2eb41388c2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:25:58.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1826" for this suite. 
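"Mappings and Item Mode" means individual secret keys are projected to renamed paths with an explicit per-file mode, rather than every key appearing under its own name with the default mode. A minimal sketch (secret/pod names, image, and key are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  containers:
  - name: app
    image: busybox                    # image is an assumption
    command: ["sh", "-c", "ls -l /etc/secret-volume && sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret           # assumed to exist with a "data-1" key
      items:
      - key: data-1
        path: new-path-data-1         # the key is remapped to this filename
        mode: 0400                    # per-item mode, overriding defaultMode
EOF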
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:25:58.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 15:26:00.330: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 15:26:02.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807560, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807560, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807560, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807560, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:26:05.386: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:26:05.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9851" for this suite. STEP: Destroying namespace "webhook-9851-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.940 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":38,"skipped":493,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:26:06.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 15:26:08.762: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 15:26:10.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807568, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807568, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807568, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807568, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:26:14.252: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow 
webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:26:26.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2846" for this suite. STEP: Destroying namespace "webhook-2846-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.365 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":39,"skipped":497,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:26:27.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:26:27.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7600" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":40,"skipped":512,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:26:27.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1211 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 15:26:27.499: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 15:26:57.782: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.188 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1211 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 15:26:57.782: INFO: >>> kubeConfig: /root/.kube/config I0511 15:26:57.820094 6 log.go:172] (0xc001936840) (0xc0029890e0) Create stream I0511 15:26:57.820129 6 log.go:172] (0xc001936840) (0xc0029890e0) Stream added, broadcasting: 1 I0511 15:26:57.822941 6 log.go:172] (0xc001936840) Reply frame received for 1 I0511 15:26:57.822982 6 log.go:172] (0xc001936840) (0xc001f23b80) Create stream I0511 15:26:57.822994 6 log.go:172] (0xc001936840) (0xc001f23b80) Stream added, broadcasting: 3 I0511 15:26:57.824015 6 log.go:172] (0xc001936840) Reply frame received for 3 I0511 15:26:57.824066 6 log.go:172] (0xc001936840) (0xc001e68000) Create stream I0511 15:26:57.824077 6 log.go:172] (0xc001936840) (0xc001e68000) Stream added, broadcasting: 5 I0511 15:26:57.825044 6 log.go:172] (0xc001936840) Reply frame received for 5 I0511 15:26:58.899446 6 log.go:172] (0xc001936840) Data frame received for 5 I0511 15:26:58.899510 6 log.go:172] (0xc001e68000) (5) Data frame handling I0511 15:26:58.899553 6 log.go:172] (0xc001936840) Data frame received for 3 I0511 15:26:58.899600 6 log.go:172] (0xc001f23b80) (3) Data frame handling I0511 15:26:58.899636 6 log.go:172] (0xc001f23b80) (3) Data frame sent I0511 15:26:58.899659 6 log.go:172] (0xc001936840) Data frame received for 3 I0511 15:26:58.899678 6 log.go:172] (0xc001f23b80) (3) Data frame handling I0511 15:26:58.902113 6 log.go:172] (0xc001936840) Data frame received for 1 I0511 15:26:58.902158 6 log.go:172] (0xc0029890e0) (1) Data frame handling I0511 15:26:58.902174 6 log.go:172] (0xc0029890e0) (1) Data frame sent I0511 15:26:58.902199 6 log.go:172] (0xc001936840) (0xc0029890e0) Stream removed, broadcasting: 1 I0511 15:26:58.902233 6 log.go:172] (0xc001936840) Go away received I0511 15:26:58.902508 6 log.go:172] (0xc001936840) (0xc0029890e0) Stream removed, broadcasting: 1 I0511 15:26:58.902529 6 log.go:172] (0xc001936840) (0xc001f23b80) Stream removed, broadcasting: 3 I0511 15:26:58.902537 6 log.go:172] (0xc001936840) (0xc001e68000) Stream removed, broadcasting: 5 May 11 15:26:58.902: INFO: Found all expected endpoints: 
[netserver-0] May 11 15:26:58.906: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.112 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1211 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 15:26:58.906: INFO: >>> kubeConfig: /root/.kube/config I0511 15:26:58.932887 6 log.go:172] (0xc0022a5600) (0xc002711f40) Create stream I0511 15:26:58.932932 6 log.go:172] (0xc0022a5600) (0xc002711f40) Stream added, broadcasting: 1 I0511 15:26:58.935904 6 log.go:172] (0xc0022a5600) Reply frame received for 1 I0511 15:26:58.935957 6 log.go:172] (0xc0022a5600) (0xc001f23c20) Create stream I0511 15:26:58.935980 6 log.go:172] (0xc0022a5600) (0xc001f23c20) Stream added, broadcasting: 3 I0511 15:26:58.936952 6 log.go:172] (0xc0022a5600) Reply frame received for 3 I0511 15:26:58.936999 6 log.go:172] (0xc0022a5600) (0xc002989180) Create stream I0511 15:26:58.937014 6 log.go:172] (0xc0022a5600) (0xc002989180) Stream added, broadcasting: 5 I0511 15:26:58.938280 6 log.go:172] (0xc0022a5600) Reply frame received for 5 I0511 15:26:59.999350 6 log.go:172] (0xc0022a5600) Data frame received for 3 I0511 15:26:59.999400 6 log.go:172] (0xc001f23c20) (3) Data frame handling I0511 15:26:59.999444 6 log.go:172] (0xc001f23c20) (3) Data frame sent I0511 15:26:59.999547 6 log.go:172] (0xc0022a5600) Data frame received for 5 I0511 15:26:59.999567 6 log.go:172] (0xc002989180) (5) Data frame handling I0511 15:26:59.999594 6 log.go:172] (0xc0022a5600) Data frame received for 3 I0511 15:26:59.999618 6 log.go:172] (0xc001f23c20) (3) Data frame handling I0511 15:27:00.002398 6 log.go:172] (0xc0022a5600) Data frame received for 1 I0511 15:27:00.002459 6 log.go:172] (0xc002711f40) (1) Data frame handling I0511 15:27:00.002486 6 log.go:172] (0xc002711f40) (1) Data frame sent I0511 15:27:00.002506 6 log.go:172] (0xc0022a5600) (0xc002711f40) Stream removed, broadcasting: 1 I0511 15:27:00.002531 6 log.go:172] (0xc0022a5600) Go away received I0511 15:27:00.002679 6 log.go:172] (0xc0022a5600) (0xc002711f40) Stream removed, broadcasting: 1 I0511 15:27:00.002717 6 log.go:172] (0xc0022a5600) (0xc001f23c20) Stream removed, broadcasting: 3 I0511 15:27:00.002737 6 log.go:172] (0xc0022a5600) (0xc002989180) Stream removed, broadcasting: 5 May 11 15:27:00.002: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:27:00.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1211" for this suite. • [SLOW TEST:32.587 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:27:00.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:27:18.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4861" for this suite. • [SLOW TEST:18.182 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":42,"skipped":542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:27:18.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-7719/configmap-test-0278ceb2-4c50-4520-86be-9df347e700c1 STEP: Creating a pod to test consume configMaps May 11 15:27:18.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2" in namespace "configmap-7719" to be "success or failure" May 11 15:27:18.432: INFO: Pod "pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 129.171437ms May 11 15:27:20.437: INFO: Pod "pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134020082s May 11 15:27:22.441: INFO: Pod "pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137810592s May 11 15:27:24.445: INFO: Pod "pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.142399926s STEP: Saw pod success May 11 15:27:24.445: INFO: Pod "pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2" satisfied condition "success or failure" May 11 15:27:24.448: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2 container env-test: STEP: delete the pod May 11 15:27:24.595: INFO: Waiting for pod pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2 to disappear May 11 15:27:24.627: INFO: Pod pod-configmaps-1aed536f-346d-4009-8852-a444ae59d6d2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:27:24.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7719" for this suite. • [SLOW TEST:6.439 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":566,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:27:24.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:27:36.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8543" for this suite. 
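The Kubelet case above runs a one-shot busybox command and asserts that its stdout is retrievable through the kubelet's log endpoint. A hand-written equivalent (pod name and echoed text are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.31            # illustrative tag
    command: ["sh", "-c", "echo 'Hello from the busybox container'"]
# kubectl logs busybox-logs-demo   # should print the echoed line once the pod completes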
• [SLOW TEST:11.550 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":578,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:27:36.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 11 15:27:38.121: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4243 /api/v1/namespaces/watch-4243/configmaps/e2e-watch-test-watch-closed 72685e38-2323-455e-9a5b-b9e5eccf2514 15264155 0 2020-05-11 15:27:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 15:27:38.121: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4243 /api/v1/namespaces/watch-4243/configmaps/e2e-watch-test-watch-closed 72685e38-2323-455e-9a5b-b9e5eccf2514 15264158 0 2020-05-11 15:27:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 11 15:27:38.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4243 /api/v1/namespaces/watch-4243/configmaps/e2e-watch-test-watch-closed 72685e38-2323-455e-9a5b-b9e5eccf2514 15264160 0 2020-05-11 15:27:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 15:27:38.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4243 /api/v1/namespaces/watch-4243/configmaps/e2e-watch-test-watch-closed 72685e38-2323-455e-9a5b-b9e5eccf2514 15264161 0 2020-05-11 15:27:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:27:38.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4243" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":45,"skipped":581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:27:38.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 15:27:40.158: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 15:27:42.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:27:44.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 11 15:27:46.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:27:48.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:27:51.483: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:27:53.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4266" for this suite. STEP: Destroying namespace "webhook-4266-markers" for this suite. 
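The validating-webhook case above first registers a webhook that rejects matching ConfigMaps, then updates and patches its rules so that the CREATE operation is removed and later re-added. A sketch of the registration plus an illustrative JSON patch; all names and the handler path are assumptions:

# To drop the CREATE operation after registration, a JSON patch along these lines works:
#   kubectl patch validatingwebhookconfiguration deny-configmap-demo --type=json \
#     -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-configmap-demo             # illustrative
webhooks:
- name: deny-configmap.example.com      # illustrative
  clientConfig:
    service:
      name: e2e-test-webhook            # service name from the log
      namespace: webhook-demo           # illustrative
      path: /always-deny                # assumed handler path
    caBundle: ""                        # base64-encoded CA, omitted
  rules:
  - operations: ["CREATE"]              # the operation the test removes and re-adds
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]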
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.581 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":46,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:27:54.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f781d29f-4b82-4523-8089-369eba34c48b STEP: Creating a pod to test consume configMaps May 11 15:27:56.411: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0" in namespace "projected-7798" to be "success or failure" May 11 15:27:56.436: INFO: Pod "pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.216939ms May 11 15:27:58.755: INFO: Pod "pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.343887005s May 11 15:28:01.331: INFO: Pod "pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.919482479s May 11 15:28:03.661: INFO: Pod "pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.249038163s May 11 15:28:05.875: INFO: Pod "pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.463783683s STEP: Saw pod success May 11 15:28:05.875: INFO: Pod "pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0" satisfied condition "success or failure" May 11 15:28:05.878: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0 container projected-configmap-volume-test: STEP: delete the pod May 11 15:28:06.470: INFO: Waiting for pod pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0 to disappear May 11 15:28:06.535: INFO: Pod pod-projected-configmaps-e192bc14-1059-405b-b5f8-fe7f334048b0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:28:06.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7798" for this suite. • [SLOW TEST:11.978 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:28:06.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:28:25.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9397" for this suite. • [SLOW TEST:18.115 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":48,"skipped":670,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:28:25.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 15:28:25.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04" in namespace "projected-9547" to be "success or failure" May 11 15:28:25.583: INFO: Pod "downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04": Phase="Pending", Reason="", readiness=false. Elapsed: 162.053224ms May 11 15:28:27.622: INFO: Pod "downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201409275s May 11 15:28:29.626: INFO: Pod "downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205476537s May 11 15:28:31.774: INFO: Pod "downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353555972s May 11 15:28:33.870: INFO: Pod "downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.449423929s STEP: Saw pod success May 11 15:28:33.870: INFO: Pod "downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04" satisfied condition "success or failure" May 11 15:28:33.873: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04 container client-container: STEP: delete the pod May 11 15:28:34.085: INFO: Waiting for pod downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04 to disappear May 11 15:28:34.113: INFO: Pod downwardapi-volume-675f4400-da53-4979-a18e-75fe042f8a04 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:28:34.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9547" for this suite. 
• [SLOW TEST:9.254 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:28:34.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 15:28:34.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6853' May 11 15:28:42.013: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 15:28:42.013: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 11 15:28:42.098: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-6rwn7] May 11 15:28:42.099: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-6rwn7" in namespace "kubectl-6853" to be "running and ready" May 11 15:28:42.135: INFO: Pod "e2e-test-httpd-rc-6rwn7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.243082ms May 11 15:28:44.344: INFO: Pod "e2e-test-httpd-rc-6rwn7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245770746s May 11 15:28:46.355: INFO: Pod "e2e-test-httpd-rc-6rwn7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256370324s May 11 15:28:48.373: INFO: Pod "e2e-test-httpd-rc-6rwn7": Phase="Running", Reason="", readiness=true. Elapsed: 6.27437603s May 11 15:28:48.373: INFO: Pod "e2e-test-httpd-rc-6rwn7" satisfied condition "running and ready" May 11 15:28:48.373: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-6rwn7] May 11 15:28:48.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6853' May 11 15:28:48.497: INFO: stderr: "" May 11 15:28:48.497: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.115. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.115. Set the 'ServerName' directive globally to suppress this message\n[Mon May 11 15:28:47.364333 2020] [mpm_event:notice] [pid 1:tid 140240522328936] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon May 11 15:28:47.364375 2020] [core:notice] [pid 1:tid 140240522328936] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 11 15:28:48.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6853' May 11 15:28:48.844: INFO: stderr: "" May 11 15:28:48.844: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:28:48.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6853" for this suite. • [SLOW TEST:14.609 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":50,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:28:48.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 11 15:28:50.643: INFO: Pod name pod-release: Found 0 pods out of 1 May 11 15:28:55.852: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:28:57.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9764" for this suite. 
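The ReplicationController case above (controller "pod-release", per the log) checks that changing a pod's matched label orphans it from the controller, which then creates a replacement. An equivalent manifest and the label change that triggers the release; the httpd image matches the one used throughout this run, while the pod-name placeholder is illustrative:

# Release the pod once it is running (substitute the actual generated pod name):
#   kubectl label pod <pod-name> name=released --overwrite
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine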
• [SLOW TEST:9.061 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":51,"skipped":762,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:28:58.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 11 15:28:58.523: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 11 15:29:00.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:29:03.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:29:04.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:29:07.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:29:08.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:29:12.490: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:29:12.494: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:29:15.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7719" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:18.714 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":52,"skipped":764,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:29:16.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 15:29:17.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94" in namespace "downward-api-833" to be "success or failure" May 11 15:29:17.390: INFO: Pod "downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94": Phase="Pending", Reason="", readiness=false. Elapsed: 28.555251ms May 11 15:29:19.393: INFO: Pod "downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031453349s May 11 15:29:21.743: INFO: Pod "downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38211322s May 11 15:29:23.769: INFO: Pod "downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.407459545s May 11 15:29:25.799: INFO: Pod "downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.437444799s STEP: Saw pod success May 11 15:29:25.799: INFO: Pod "downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94" satisfied condition "success or failure" May 11 15:29:25.801: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94 container client-container: STEP: delete the pod May 11 15:29:25.962: INFO: Waiting for pod downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94 to disappear May 11 15:29:25.992: INFO: Pod downwardapi-volume-0a772a91-d59c-4678-8308-dcbfe8898c94 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:29:25.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-833" for this suite. • [SLOW TEST:9.281 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":776,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:29:25.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:29:27.005: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 11 15:29:32.111: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 15:29:34.118: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 15:29:34.470: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2350 /apis/apps/v1/namespaces/deployment-2350/deployments/test-cleanup-deployment 66ebbdd9-d44a-4ecd-a7aa-931ca79a2294 15264777 1 2020-05-11 15:29:34 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000d6f068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 11 15:29:34.503: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-2350 /apis/apps/v1/namespaces/deployment-2350/replicasets/test-cleanup-deployment-55ffc6b7b6 1e3de15a-0757-4242-a575-96b11d27e7af 15264780 1 2020-05-11 15:29:34 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 66ebbdd9-d44a-4ecd-a7aa-931ca79a2294 0xc0030cedf7 0xc0030cedf8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030cef88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 15:29:34.503: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 11 15:29:34.503: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2350 /apis/apps/v1/namespaces/deployment-2350/replicasets/test-cleanup-controller ed931dea-2614-41fa-8260-fbefd75ee12f 15264779 1 2020-05-11 15:29:26 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 66ebbdd9-d44a-4ecd-a7aa-931ca79a2294 0xc0030ceb27 0xc0030ceb28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log
File IfNotPresent nil false false false}] [] Always 0xc0030ced68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 15:29:34.618: INFO: Pod "test-cleanup-controller-x47q4" is available: &Pod{ObjectMeta:{test-cleanup-controller-x47q4 test-cleanup-controller- deployment-2350 /api/v1/namespaces/deployment-2350/pods/test-cleanup-controller-x47q4 773056a7-b2c2-40d2-85ee-5f54d22ba334 15264772 0 2020-05-11 15:29:27 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ed931dea-2614-41fa-8260-fbefd75ee12f 0xc001ce6827 0xc001ce6828}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m8jgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m8jgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m8jgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:29:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:29:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:29:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.118,StartTime:2020-05-11 15:29:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 15:29:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://23a3f9160595bec57e1c148c8d12857e17b2680159818c875378be0b39ef31be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 15:29:34.618: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-hkqwq" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-hkqwq test-cleanup-deployment-55ffc6b7b6- deployment-2350 /api/v1/namespaces/deployment-2350/pods/test-cleanup-deployment-55ffc6b7b6-hkqwq 6050c9e5-b6e2-41b9-b481-e43b9ddd298b 15264784 0 2020-05-11 15:29:34 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 1e3de15a-0757-4242-a575-96b11d27e7af 0xc001ce6ec7 0xc001ce6ec8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m8jgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m8jgk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m8jgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:29:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:29:34.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2350" for this suite. 
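
The dump above shows why the old ReplicaSet disappears: the Deployment was created with RevisionHistoryLimit:*0, so superseded ReplicaSets are garbage-collected as soon as the rollout completes. A minimal stand-alone reproduction, assuming only a reachable cluster and kubectl; the namespace, names, and the second image tag are illustrative:

kubectl create namespace cleanup-demo
kubectl apply -n cleanup-demo -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep zero old ReplicaSets, as in the test
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl -n cleanup-demo rollout status deployment/cleanup-demo
# Trigger a new revision (the tag is illustrative).
kubectl -n cleanup-demo set image deployment/cleanup-demo httpd=docker.io/library/httpd:2.4.39-alpine
kubectl -n cleanup-demo rollout status deployment/cleanup-demo
# Only the ReplicaSet of the current revision should remain.
kubectl -n cleanup-demo get rs
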
• [SLOW TEST:9.221 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":54,"skipped":785,"failed":0} [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:29:35.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:29:36.410: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"57717613-7279-453d-a72a-527a5ab608c1", Controller:(*bool)(0xc001bcbbb2), BlockOwnerDeletion:(*bool)(0xc001bcbbb3)}} May 11 15:29:36.468: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"fe7ac9bc-c713-471a-aae8-ac8daa646876", Controller:(*bool)(0xc003139aaa), BlockOwnerDeletion:(*bool)(0xc003139aab)}} May 11 15:29:36.486: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9aa4c00d-fdb5-476a-a9ea-2b6172bf4ae0", Controller:(*bool)(0xc002349972), BlockOwnerDeletion:(*bool)(0xc002349973)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:29:41.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5174" for this suite. 
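
The dependency-circle test wires three pods into an ownership loop (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the OwnerReferences printed above) and checks that the garbage collector is not deadlocked by it. A rough sketch of that setup with plain kubectl, assuming a test cluster; the helper functions and patch shape are illustrative, and owner UIDs have to be read back after creation:

ns=gc-demo
kubectl create namespace "$ns"
for p in pod1 pod2 pod3; do
  kubectl -n "$ns" run "$p" --image=docker.io/library/httpd:2.4.38-alpine --restart=Never
done
uid() { kubectl -n "$ns" get pod "$1" -o jsonpath='{.metadata.uid}'; }
# own CHILD OWNER: make CHILD owned by OWNER (UID looked up at runtime).
own() {
  kubectl -n "$ns" patch pod "$1" --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$2\",\"uid\":\"$(uid "$2")\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
}
own pod1 pod3
own pod2 pod1
own pod3 pod2
# Deleting any member of the cycle must not wedge the garbage collector;
# the remaining dependents get collected instead of blocking forever.
kubectl -n "$ns" delete pod pod1
kubectl -n "$ns" get pods
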
• [SLOW TEST:6.401 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":55,"skipped":785,"failed":0} [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:29:41.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:29:42.102: INFO: Create a RollingUpdate DaemonSet May 11 15:29:42.105: INFO: Check that daemon pods launch on every node of the cluster May 11 15:29:42.316: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:42.337: INFO: Number of nodes with available pods: 0 May 11 15:29:42.337: INFO: Node jerma-worker is running more than one daemon pod May 11 15:29:43.343: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:43.347: INFO: Number of nodes with available pods: 0 May 11 15:29:43.347: INFO: Node jerma-worker is running more than one daemon pod May 11 15:29:44.342: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:44.346: INFO: Number of nodes with available pods: 0 May 11 15:29:44.346: INFO: Node jerma-worker is running more than one daemon pod May 11 15:29:45.374: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:45.376: INFO: Number of nodes with available pods: 0 May 11 15:29:45.376: INFO: Node jerma-worker is running more than one daemon pod May 11 15:29:46.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:46.384: INFO: Number of nodes with available pods: 0 May 11 15:29:46.384: INFO: Node jerma-worker is running more than one daemon pod May 11 15:29:47.341: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:47.446: INFO: Number of nodes with available pods: 2 May 11 15:29:47.446: INFO: Number of running nodes: 2, number of 
available pods: 2 May 11 15:29:47.446: INFO: Update the DaemonSet to trigger a rollout May 11 15:29:47.487: INFO: Updating DaemonSet daemon-set May 11 15:29:52.590: INFO: Roll back the DaemonSet before rollout is complete May 11 15:29:52.597: INFO: Updating DaemonSet daemon-set May 11 15:29:52.597: INFO: Make sure DaemonSet rollback is complete May 11 15:29:52.623: INFO: Wrong image for pod: daemon-set-xsbnf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 15:29:52.623: INFO: Pod daemon-set-xsbnf is not available May 11 15:29:52.746: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:53.758: INFO: Wrong image for pod: daemon-set-xsbnf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 15:29:53.758: INFO: Pod daemon-set-xsbnf is not available May 11 15:29:53.763: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:54.751: INFO: Wrong image for pod: daemon-set-xsbnf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 15:29:54.751: INFO: Pod daemon-set-xsbnf is not available May 11 15:29:54.754: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:55.749: INFO: Wrong image for pod: daemon-set-xsbnf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 15:29:55.749: INFO: Pod daemon-set-xsbnf is not available May 11 15:29:55.751: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:29:56.750: INFO: Pod daemon-set-7zl7n is not available May 11 15:29:56.752: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1666, will wait for the garbage collector to delete the pods May 11 15:29:56.823: INFO: Deleting DaemonSet.extensions daemon-set took: 13.878031ms May 11 15:29:57.223: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.203157ms May 11 15:30:00.859: INFO: Number of nodes with available pods: 0 May 11 15:30:00.859: INFO: Number of running nodes: 0, number of available pods: 0 May 11 15:30:00.862: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1666/daemonsets","resourceVersion":"15265011"},"items":null} May 11 15:30:00.894: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1666/pods","resourceVersion":"15265012"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:00.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1666" for this suite. 
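
The rollback sequence above (update the DaemonSet to foo:non-existent, then roll back before the rollout finishes) can be replayed with kubectl; pods that never ran the broken image should keep a restart count of zero. A sketch assuming a test cluster, with names mirroring the log:

ns=ds-demo
kubectl create namespace "$ns"
kubectl -n "$ns" apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl -n "$ns" rollout status ds/daemon-set
# Update to an unpullable image, as the test does, so the rollout sticks.
kubectl -n "$ns" set image ds/daemon-set app=foo:non-existent
# Roll back before the rollout completes; healthy pods that never ran
# the bad image should not be restarted.
kubectl -n "$ns" rollout undo ds/daemon-set
kubectl -n "$ns" rollout status ds/daemon-set
kubectl -n "$ns" get pods -o wide
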
• [SLOW TEST:19.299 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":56,"skipped":785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:00.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 15:30:01.231: INFO: Waiting up to 5m0s for pod "downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a" in namespace "downward-api-7421" to be "success or failure" May 11 15:30:01.250: INFO: Pod "downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.731997ms May 11 15:30:03.279: INFO: Pod "downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047741203s May 11 15:30:05.283: INFO: Pod "downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05150883s May 11 15:30:07.287: INFO: Pod "downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055858601s STEP: Saw pod success May 11 15:30:07.287: INFO: Pod "downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a" satisfied condition "success or failure" May 11 15:30:07.290: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a container client-container: STEP: delete the pod May 11 15:30:07.402: INFO: Waiting for pod downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a to disappear May 11 15:30:07.407: INFO: Pod downwardapi-volume-709bb9da-5709-4273-9733-999007512c5a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:07.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7421" for this suite. 
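
The pod in this test mounts a downwardAPI volume and reads its own memory limit back from the projected file. A self-contained equivalent, assuming a cluster and a generic busybox image (names are illustrative); the value is projected in bytes, so a 64Mi limit reads back as 67108864:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# After the pod completes, the log is the limit in bytes: 67108864.
kubectl logs downwardapi-demo
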
• [SLOW TEST:6.492 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:07.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:07.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2649" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":58,"skipped":870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:07.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 15:30:07.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3283' May 11 15:30:07.692: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 15:30:07.692: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 11 15:30:11.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3283' May 11 15:30:12.228: INFO: stderr: "" May 11 15:30:12.228: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:12.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3283" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":59,"skipped":893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:12.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6583/configmap-test-2ed4f28c-c7b3-4f80-ac71-c0cd75dd6a81 STEP: Creating a pod to test consume configMaps May 11 15:30:13.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961" in namespace "configmap-6583" to be "success or failure" May 11 15:30:14.051: INFO: Pod "pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961": Phase="Pending", Reason="", readiness=false. Elapsed: 306.214022ms May 11 15:30:16.262: INFO: Pod "pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516805445s May 11 15:30:18.411: INFO: Pod "pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665511798s May 11 15:30:20.456: INFO: Pod "pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.711199412s STEP: Saw pod success May 11 15:30:20.456: INFO: Pod "pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961" satisfied condition "success or failure" May 11 15:30:20.459: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961 container env-test: STEP: delete the pod May 11 15:30:20.749: INFO: Waiting for pod pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961 to disappear May 11 15:30:20.756: INFO: Pod pod-configmaps-a819e8ad-3add-4503-a6fd-1fad07268961 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:20.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6583" for this suite. • [SLOW TEST:8.411 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":922,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:20.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:32.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2716" for this suite. • [SLOW TEST:11.546 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":61,"skipped":929,"failed":0} [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:32.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 11 15:30:43.714: INFO: 5 pods remaining May 11 15:30:43.714: INFO: 5 pods has nil DeletionTimestamp May 11 15:30:43.714: INFO: STEP: Gathering metrics W0511 15:30:48.291005 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 15:30:48.291: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:48.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7645" for this suite. 
• [SLOW TEST:15.988 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":62,"skipped":929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:48.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6d9fe8a1-f45b-41c8-ad86-11a4179e2f41 STEP: Creating a pod to test consume configMaps May 11 15:30:48.395: INFO: Waiting up to 5m0s for pod "pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146" in namespace "configmap-1183" to be "success or failure" May 11 15:30:48.402: INFO: Pod "pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146": Phase="Pending", Reason="", readiness=false. Elapsed: 7.413942ms May 11 15:30:50.405: INFO: Pod "pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01054337s May 11 15:30:52.597: INFO: Pod "pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.202364504s STEP: Saw pod success May 11 15:30:52.597: INFO: Pod "pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146" satisfied condition "success or failure" May 11 15:30:52.601: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146 container configmap-volume-test: STEP: delete the pod May 11 15:30:52.650: INFO: Waiting for pod pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146 to disappear May 11 15:30:52.778: INFO: Pod pod-configmaps-43d7d641-7768-4c76-83f5-da2ee3e0b146 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:30:52.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1183" for this suite. 
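
The defaultMode variant of the ConfigMap volume test checks the mode bits of the projected file as well as its content. A minimal equivalent, assuming a cluster and a generic busybox image (all names illustrative):

kubectl create configmap cm-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1; cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-mode-demo
      defaultMode: 0400   # projected file becomes mode 400
EOF
# Once the pod has completed, the log shows "400" and then "value-1".
kubectl logs pod-configmaps-mode-demo
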
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":966,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:30:52.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 15:30:54.112: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 15:30:56.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:30:58.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:31:00.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724807854, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:31:03.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:31:03.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4365-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:31:05.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7930" for this suite. STEP: Destroying namespace "webhook-7930-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.511 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":64,"skipped":983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:31:05.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 11 15:31:06.145: INFO: created pod pod-service-account-defaultsa May 11 15:31:06.145: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 11 15:31:06.261: INFO: created pod pod-service-account-mountsa May 11 15:31:06.261: INFO: pod pod-service-account-mountsa service account token volume mount: true May 11 15:31:06.272: INFO: created pod pod-service-account-nomountsa May 11 15:31:06.272: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 11 15:31:06.353: INFO: created pod pod-service-account-defaultsa-mountspec May 11 15:31:06.353: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 11 15:31:06.430: INFO: created pod pod-service-account-mountsa-mountspec May 11 15:31:06.430: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 11 15:31:06.493: INFO: created pod pod-service-account-nomountsa-mountspec May 11 15:31:06.493: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 11 15:31:06.615: INFO: created pod pod-service-account-defaultsa-nomountspec May 11 15:31:06.615: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 11 15:31:06.801: INFO: created pod pod-service-account-mountsa-nomountspec May 11 15:31:06.801: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 11 15:31:06.808: INFO: created pod pod-service-account-nomountsa-nomountspec May 11 15:31:06.808: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:31:06.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "svcaccounts-439" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":65,"skipped":1086,"failed":0} ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:31:06.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:31:21.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8043" for this suite. • [SLOW TEST:15.592 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":66,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:31:22.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 11 15:31:23.660: INFO: namespace kubectl-8290 May 11 15:31:23.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8290' May 11 15:31:24.059: INFO: stderr: "" May 11 15:31:24.059: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 11 15:31:25.093: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:31:25.093: INFO: Found 0 / 1 May 11 15:31:26.064: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:31:26.064: INFO: Found 0 / 1 May 11 15:31:27.069: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:31:27.069: INFO: Found 0 / 1 May 11 15:31:28.136: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:31:28.136: INFO: Found 0 / 1 May 11 15:31:29.562: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:31:29.562: INFO: Found 1 / 1 May 11 15:31:29.562: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 15:31:29.603: INFO: Selector matched 1 pods for map[app:agnhost] May 11 15:31:29.603: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 15:31:29.603: INFO: wait on agnhost-master startup in kubectl-8290 May 11 15:31:29.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-rf6c8 agnhost-master --namespace=kubectl-8290' May 11 15:31:29.830: INFO: stderr: "" May 11 15:31:29.830: INFO: stdout: "Paused\n" STEP: exposing RC May 11 15:31:29.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8290' May 11 15:31:30.225: INFO: stderr: "" May 11 15:31:30.225: INFO: stdout: "service/rm2 exposed\n" May 11 15:31:30.267: INFO: Service rm2 in namespace kubectl-8290 found. STEP: exposing service May 11 15:31:32.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8290' May 11 15:31:32.413: INFO: stderr: "" May 11 15:31:32.413: INFO: stdout: "service/rm3 exposed\n" May 11 15:31:32.471: INFO: Service rm3 in namespace kubectl-8290 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:31:34.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8290" for this suite. 
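
The two expose steps above come straight from the log and can be replayed as-is: the RC is exposed as service rm2, and rm2 is then re-exposed as rm3 on a different service port but the same targetPort, so both services resolve to the same pod:

kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8290
kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8290
# Both services should list the same pod IP on port 6379.
kubectl --kubeconfig=/root/.kube/config get endpoints rm2 rm3 --namespace=kubectl-8290
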
• [SLOW TEST:11.964 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":67,"skipped":1107,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:31:34.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 15:31:45.282: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 15:31:45.388: INFO: Pod pod-with-prestop-exec-hook still exists May 11 15:31:47.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 15:31:47.392: INFO: Pod pod-with-prestop-exec-hook still exists May 11 15:31:49.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 15:31:49.424: INFO: Pod pod-with-prestop-exec-hook still exists May 11 15:31:51.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 15:31:51.412: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:31:51.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4027" for this suite. 
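The pod under test carries a preStop exec hook, which is why deletion takes several seconds above: the kubelet runs the hook to completion (bounded by the grace period) before stopping the container. A minimal sketch of such a pod, where the image, command, and hook body are assumptions and only the pod name comes from the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main                                  # hypothetical container name
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop"] # runs before the container receives SIGTERM
EOF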
• [SLOW TEST:16.943 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1122,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:31:51.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:31:57.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9902" for this suite. 
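The property being verified is that watches opened at the same resourceVersion deliver events in one canonical order. Outside the framework, a watch can be opened from an explicit resourceVersion through the raw API; the resource type and namespace below are illustrative, not taken from the test:

# Stream watch events for pods starting at resourceVersion 0; several of these
# started concurrently should observe the same event order
kubectl get --raw "/api/v1/namespaces/default/pods?watch=1&resourceVersion=0"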
• [SLOW TEST:5.740 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":69,"skipped":1127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:31:57.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8496 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8496 STEP: Creating statefulset with conflicting port in namespace statefulset-8496 STEP: Waiting until pod test-pod will start running in namespace statefulset-8496 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8496 May 11 15:32:01.432: INFO: Observed stateful pod in namespace: statefulset-8496, name: ss-0, uid: b6b6b69a-a50d-40a6-9ee6-34a5d4344732, status phase: Pending. Waiting for statefulset controller to delete. May 11 15:32:01.564: INFO: Observed stateful pod in namespace: statefulset-8496, name: ss-0, uid: b6b6b69a-a50d-40a6-9ee6-34a5d4344732, status phase: Failed. Waiting for statefulset controller to delete. May 11 15:32:01.588: INFO: Observed stateful pod in namespace: statefulset-8496, name: ss-0, uid: b6b6b69a-a50d-40a6-9ee6-34a5d4344732, status phase: Failed. Waiting for statefulset controller to delete. 
May 11 15:32:01.635: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8496 STEP: Removing pod with conflicting port in namespace statefulset-8496 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8496 and running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 15:32:07.805: INFO: Deleting all statefulset in ns statefulset-8496 May 11 15:32:07.808: INFO: Scaling statefulset ss to 0 May 11 15:32:28.090: INFO: Waiting for statefulset status.replicas updated to 0 May 11 15:32:28.095: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:32:28.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8496" for this suite. • [SLOW TEST:31.764 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":70,"skipped":1208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:32:28.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 11 15:32:30.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7737 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 11 15:32:35.719: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0511 15:32:35.656611 713 log.go:172] (0xc0009fc580) (0xc0007541e0) Create stream\nI0511 15:32:35.656662 713 log.go:172] (0xc0009fc580) (0xc0007541e0) Stream added, broadcasting: 1\nI0511 15:32:35.659361 713 log.go:172] (0xc0009fc580) Reply frame received for 1\nI0511 15:32:35.659423 713 log.go:172] (0xc0009fc580) (0xc00087a000) Create stream\nI0511 15:32:35.659466 713 log.go:172] (0xc0009fc580) (0xc00087a000) Stream added, broadcasting: 3\nI0511 15:32:35.660509 713 log.go:172] (0xc0009fc580) Reply frame received for 3\nI0511 15:32:35.660563 713 log.go:172] (0xc0009fc580) (0xc0006aa0a0) Create stream\nI0511 15:32:35.660609 713 log.go:172] (0xc0009fc580) (0xc0006aa0a0) Stream added, broadcasting: 5\nI0511 15:32:35.661804 713 log.go:172] (0xc0009fc580) Reply frame received for 5\nI0511 15:32:35.661849 713 log.go:172] (0xc0009fc580) (0xc0006aa140) Create stream\nI0511 15:32:35.661876 713 log.go:172] (0xc0009fc580) (0xc0006aa140) Stream added, broadcasting: 7\nI0511 15:32:35.663218 713 log.go:172] (0xc0009fc580) Reply frame received for 7\nI0511 15:32:35.663409 713 log.go:172] (0xc00087a000) (3) Writing data frame\nI0511 15:32:35.663531 713 log.go:172] (0xc00087a000) (3) Writing data frame\nI0511 15:32:35.664460 713 log.go:172] (0xc0009fc580) Data frame received for 5\nI0511 15:32:35.664487 713 log.go:172] (0xc0006aa0a0) (5) Data frame handling\nI0511 15:32:35.664507 713 log.go:172] (0xc0006aa0a0) (5) Data frame sent\nI0511 15:32:35.664889 713 log.go:172] (0xc0009fc580) Data frame received for 5\nI0511 15:32:35.664902 713 log.go:172] (0xc0006aa0a0) (5) Data frame handling\nI0511 15:32:35.664916 713 log.go:172] (0xc0006aa0a0) (5) Data frame sent\nI0511 15:32:35.696379 713 log.go:172] (0xc0009fc580) Data frame received for 5\nI0511 15:32:35.696406 713 log.go:172] (0xc0006aa0a0) (5) Data frame handling\nI0511 15:32:35.696494 713 log.go:172] (0xc0009fc580) Data frame received for 7\nI0511 15:32:35.696506 713 log.go:172] (0xc0006aa140) (7) Data frame handling\nI0511 15:32:35.697425 713 log.go:172] (0xc0009fc580) Data frame received for 1\nI0511 15:32:35.697471 713 log.go:172] (0xc0009fc580) (0xc00087a000) Stream removed, broadcasting: 3\nI0511 15:32:35.697520 713 log.go:172] (0xc0007541e0) (1) Data frame handling\nI0511 15:32:35.697561 713 log.go:172] (0xc0007541e0) (1) Data frame sent\nI0511 15:32:35.697593 713 log.go:172] (0xc0009fc580) (0xc0007541e0) Stream removed, broadcasting: 1\nI0511 15:32:35.697636 713 log.go:172] (0xc0009fc580) Go away received\nI0511 15:32:35.698076 713 log.go:172] (0xc0009fc580) (0xc0007541e0) Stream removed, broadcasting: 1\nI0511 15:32:35.698124 713 log.go:172] (0xc0009fc580) (0xc00087a000) Stream removed, broadcasting: 3\nI0511 15:32:35.698139 713 log.go:172] (0xc0009fc580) (0xc0006aa0a0) Stream removed, broadcasting: 5\nI0511 15:32:35.698150 713 log.go:172] (0xc0009fc580) (0xc0006aa140) Stream removed, broadcasting: 7\n" May 11 15:32:35.719: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:32:37.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7737" for this suite. 
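Minus the harness wrapper, and with shell quoting normalized, the invocation under test looks like this; the piped input string is inferred from the stdout above, and the log's own warning notes that --generator=job/v1 is deprecated on this kubectl (v1.17):

# Create a Job, attach to it with stdin, and delete the Job once the command exits
printf 'abcd1234' | kubectl --namespace=kubectl-7737 run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'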
• [SLOW TEST:8.915 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":71,"skipped":1284,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:32:37.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:32:44.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1988" for this suite. • [SLOW TEST:6.967 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":72,"skipped":1291,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:32:44.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 15:32:44.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine 
--namespace=kubectl-8414' May 11 15:32:45.464: INFO: stderr: "" May 11 15:32:45.464: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 11 15:32:45.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8414' May 11 15:32:47.679: INFO: stderr: "" May 11 15:32:47.679: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:32:47.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8414" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":73,"skipped":1296,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:32:47.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-a4adf4aa-d902-49c2-b1a1-6acf900f5d99 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:32:48.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9841" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":74,"skipped":1307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:32:48.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 15:32:48.134: INFO: Waiting up to 5m0s for pod "pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4" in namespace "emptydir-7161" to be "success or failure" May 11 15:32:48.148: INFO: Pod "pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.814425ms May 11 15:32:50.244: INFO: Pod "pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109739804s May 11 15:32:52.287: INFO: Pod "pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4": Phase="Running", Reason="", readiness=true. Elapsed: 4.15306782s May 11 15:32:54.291: INFO: Pod "pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157475704s STEP: Saw pod success May 11 15:32:54.291: INFO: Pod "pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4" satisfied condition "success or failure" May 11 15:32:54.294: INFO: Trying to get logs from node jerma-worker2 pod pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4 container test-container: STEP: delete the pod May 11 15:32:54.350: INFO: Waiting for pod pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4 to disappear May 11 15:32:54.374: INFO: Pod pod-2c231dcd-4ccf-471e-9a2f-5fa0f7a48da4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:32:54.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7161" for this suite. • [SLOW TEST:6.329 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1341,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:32:54.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 11 15:33:01.023: INFO: Successfully updated pod "labelsupdate11ce8753-33aa-4ed7-af81-c6e6efd03546" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:33:03.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-702" for this suite. 
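What this test exercises is a downwardAPI volume that projects metadata.labels into a file, which the kubelet rewrites when the labels change; the "Successfully updated pod" line above is that label update. A minimal sketch, with the pod name, label key, and mount path all hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                 # hypothetical name
  labels:
    key: value1
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels                # file whose contents track the pod's labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Changing a label causes the kubelet to rewrite /etc/podinfo/labels in place
kubectl label pod labels-demo key=value2 --overwrite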
• [SLOW TEST:8.781 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:33:03.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a6464f22-e60a-47b8-9aa3-1ad4721bb2c3 STEP: Creating a pod to test consume configMaps May 11 15:33:03.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5" in namespace "configmap-1241" to be "success or failure" May 11 15:33:03.228: INFO: Pod "pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.891052ms May 11 15:33:05.234: INFO: Pod "pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010551389s May 11 15:33:07.251: INFO: Pod "pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026874214s STEP: Saw pod success May 11 15:33:07.251: INFO: Pod "pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5" satisfied condition "success or failure" May 11 15:33:07.253: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5 container configmap-volume-test: STEP: delete the pod May 11 15:33:07.696: INFO: Waiting for pod pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5 to disappear May 11 15:33:07.749: INFO: Pod pod-configmaps-86c06408-2450-46fe-8659-a6c4b701e6f5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:33:07.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1241" for this suite. 
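The "with mappings" variant remaps a ConfigMap key to a chosen file path through the volume's items list, instead of mounting every key under its own name. A sketch under assumed names (the key, value, and paths are not from the log):

kubectl create configmap demo-map --from-literal=data-1=value-1   # hypothetical key and value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapping-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/cm/path/to/data-2"]   # prints value-1
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-map
      items:
      - key: data-1
        path: path/to/data-2        # only this key is mounted, at this relative path
EOF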
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1380,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:33:07.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 11 15:33:15.073: INFO: Successfully updated pod "adopt-release-6bqhf" STEP: Checking that the Job readopts the Pod May 11 15:33:15.073: INFO: Waiting up to 15m0s for pod "adopt-release-6bqhf" in namespace "job-419" to be "adopted" May 11 15:33:15.110: INFO: Pod "adopt-release-6bqhf": Phase="Running", Reason="", readiness=true. Elapsed: 37.191361ms May 11 15:33:17.114: INFO: Pod "adopt-release-6bqhf": Phase="Running", Reason="", readiness=true. Elapsed: 2.041301746s May 11 15:33:17.114: INFO: Pod "adopt-release-6bqhf" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 11 15:33:17.622: INFO: Successfully updated pod "adopt-release-6bqhf" STEP: Checking that the Job releases the Pod May 11 15:33:17.622: INFO: Waiting up to 15m0s for pod "adopt-release-6bqhf" in namespace "job-419" to be "released" May 11 15:33:17.653: INFO: Pod "adopt-release-6bqhf": Phase="Running", Reason="", readiness=true. Elapsed: 31.403826ms May 11 15:33:19.658: INFO: Pod "adopt-release-6bqhf": Phase="Running", Reason="", readiness=true. Elapsed: 2.035745022s May 11 15:33:19.658: INFO: Pod "adopt-release-6bqhf" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:33:19.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-419" for this suite. 
• [SLOW TEST:11.910 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":78,"skipped":1397,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:33:19.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 15:33:20.187: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 15:33:20.308: INFO: Waiting for terminating namespaces to be deleted... May 11 15:33:20.311: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 11 15:33:20.316: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 15:33:20.316: INFO: Container kindnet-cni ready: true, restart count 0 May 11 15:33:20.316: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 15:33:20.316: INFO: Container kube-proxy ready: true, restart count 0 May 11 15:33:20.316: INFO: adopt-release-lggrq from job-419 started at 2020-05-11 15:33:08 +0000 UTC (1 container statuses recorded) May 11 15:33:20.316: INFO: Container c ready: true, restart count 0 May 11 15:33:20.316: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 11 15:33:20.322: INFO: adopt-release-npl6s from job-419 started at 2020-05-11 15:33:17 +0000 UTC (1 container statuses recorded) May 11 15:33:20.322: INFO: Container c ready: false, restart count 0 May 11 15:33:20.322: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 15:33:20.322: INFO: Container kindnet-cni ready: true, restart count 0 May 11 15:33:20.322: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 11 15:33:20.322: INFO: Container kube-bench ready: false, restart count 0 May 11 15:33:20.322: INFO: adopt-release-6bqhf from job-419 started at 2020-05-11 15:33:08 +0000 UTC (1 container statuses recorded) May 11 15:33:20.322: INFO: Container c ready: true, restart count 0 May 11 15:33:20.322: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 15:33:20.322: INFO: Container kube-proxy ready: true, restart count 0 May 11 15:33:20.322: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 11 15:33:20.322: INFO: Container kube-hunter ready: false, 
restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-905c2acd-b9da-4bf8-93b6-d3bf16213c47 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-905c2acd-b9da-4bf8-93b6-d3bf16213c47 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-905c2acd-b9da-4bf8-93b6-d3bf16213c47 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:38:33.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1546" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:313.914 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":79,"skipped":1411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:38:33.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 11 15:38:33.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1887' May 11 15:38:33.984: INFO: stderr: "" May 11 15:38:33.984: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
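The Go-template polling that follows is dense; the same per-pod status check can be expressed with jsonpath, using the label and namespace from the log:

# One line per update-demo pod: name, phase, and per-container ready flags
kubectl get pods -l name=update-demo --namespace=kubectl-1887 \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{" "}{.status.containerStatuses[*].ready}{"\n"}{end}'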
May 11 15:38:33.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' May 11 15:38:34.105: INFO: stderr: "" May 11 15:38:34.105: INFO: stdout: "update-demo-nautilus-f72dj update-demo-nautilus-mdx68 " May 11 15:38:34.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f72dj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' May 11 15:38:34.191: INFO: stderr: "" May 11 15:38:34.191: INFO: stdout: "" May 11 15:38:34.191: INFO: update-demo-nautilus-f72dj is created but not running May 11 15:38:39.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' May 11 15:38:39.670: INFO: stderr: "" May 11 15:38:39.670: INFO: stdout: "update-demo-nautilus-f72dj update-demo-nautilus-mdx68 " May 11 15:38:39.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f72dj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' May 11 15:38:40.120: INFO: stderr: "" May 11 15:38:40.121: INFO: stdout: "true" May 11 15:38:40.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f72dj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' May 11 15:38:40.223: INFO: stderr: "" May 11 15:38:40.223: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 15:38:40.223: INFO: validating pod update-demo-nautilus-f72dj May 11 15:38:40.227: INFO: got data: { "image": "nautilus.jpg" } May 11 15:38:40.227: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 15:38:40.227: INFO: update-demo-nautilus-f72dj is verified up and running May 11 15:38:40.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mdx68 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' May 11 15:38:40.504: INFO: stderr: "" May 11 15:38:40.504: INFO: stdout: "true" May 11 15:38:40.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mdx68 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' May 11 15:38:40.591: INFO: stderr: "" May 11 15:38:40.591: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 15:38:40.591: INFO: validating pod update-demo-nautilus-mdx68 May 11 15:38:40.594: INFO: got data: { "image": "nautilus.jpg" } May 11 15:38:40.594: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 15:38:40.594: INFO: update-demo-nautilus-mdx68 is verified up and running STEP: using delete to clean up resources May 11 15:38:40.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1887' May 11 15:38:40.726: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 15:38:40.726: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 15:38:40.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1887' May 11 15:38:45.232: INFO: stderr: "No resources found in kubectl-1887 namespace.\n" May 11 15:38:45.232: INFO: stdout: "" May 11 15:38:45.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1887 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 15:38:45.330: INFO: stderr: "" May 11 15:38:45.330: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:38:45.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1887" for this suite. • [SLOW TEST:11.755 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":80,"skipped":1446,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:38:45.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 11 15:38:45.396: INFO: PodSpec: initContainers in spec.initContainers May 11 15:39:35.427: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-758cfc20-3550-4424-b8b4-2f7556e422db", GenerateName:"", Namespace:"init-container-8501", SelfLink:"/api/v1/namespaces/init-container-8501/pods/pod-init-758cfc20-3550-4424-b8b4-2f7556e422db", 
UID:"f6b5c9b1-c373-4e11-877b-13a980a16e0a", ResourceVersion:"15267893", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724808325, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"396117047"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vknk2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a70000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vknk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vknk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vknk2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0030ce158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fd8000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030ce390)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030ce3b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0030ce3b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0030ce3bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724808325, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724808325, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724808325, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724808325, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.140", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.140"}}, StartTime:(*v1.Time)(0xc0025ec060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0025ec860), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001a5a070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://13313d18b0f3639128beddcd9ff7fe24512d84e3401831c9af07e0bb9d0fa182", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025ec880), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025ec840), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0030ce4ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:39:35.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8501" for this suite. 
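The pod spec dumped above reduces to the following shape; the container names and images come from the dump, while the manifest itself is a reconstruction rather than the test's literal input. With restartPolicy Always the kubelet retries init1 with backoff (the growing RestartCount in the status), so init2 and run1 never start and run1 stays in the Waiting state:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo               # hypothetical; the test generates its own name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]         # fails on every attempt, blocking init2 and run1
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF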
• [SLOW TEST:50.230 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":81,"skipped":1454,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:39:35.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7738 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7738 I0511 15:39:35.926269 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7738, replica count: 2 I0511 15:39:38.976680 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 15:39:41.976858 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 15:39:41.976: INFO: Creating new exec pod May 11 15:39:49.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7738 execpodw8q88 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 11 15:39:49.368: INFO: stderr: "I0511 15:39:49.272123 1013 log.go:172] (0xc0000f5340) (0xc0008c0000) Create stream\nI0511 15:39:49.272166 1013 log.go:172] (0xc0000f5340) (0xc0008c0000) Stream added, broadcasting: 1\nI0511 15:39:49.274014 1013 log.go:172] (0xc0000f5340) Reply frame received for 1\nI0511 15:39:49.274033 1013 log.go:172] (0xc0000f5340) (0xc0005fd900) Create stream\nI0511 15:39:49.274039 1013 log.go:172] (0xc0000f5340) (0xc0005fd900) Stream added, broadcasting: 3\nI0511 15:39:49.274744 1013 log.go:172] (0xc0000f5340) Reply frame received for 3\nI0511 15:39:49.274765 1013 log.go:172] (0xc0000f5340) (0xc0008c00a0) Create stream\nI0511 15:39:49.274776 1013 log.go:172] (0xc0000f5340) (0xc0008c00a0) Stream added, broadcasting: 5\nI0511 15:39:49.275421 1013 log.go:172] (0xc0000f5340) Reply frame received for 5\nI0511 15:39:49.363378 1013 log.go:172] (0xc0000f5340) Data frame received for 5\nI0511 15:39:49.363462 1013 log.go:172] (0xc0008c00a0) (5) Data frame handling\nI0511 15:39:49.363492 1013 log.go:172] (0xc0008c00a0) (5) 
Data frame sent\nI0511 15:39:49.363507 1013 log.go:172] (0xc0000f5340) Data frame received for 5\nI0511 15:39:49.363522 1013 log.go:172] (0xc0008c00a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 15:39:49.363558 1013 log.go:172] (0xc0008c00a0) (5) Data frame sent\nI0511 15:39:49.363755 1013 log.go:172] (0xc0000f5340) Data frame received for 3\nI0511 15:39:49.363777 1013 log.go:172] (0xc0005fd900) (3) Data frame handling\nI0511 15:39:49.363803 1013 log.go:172] (0xc0000f5340) Data frame received for 5\nI0511 15:39:49.363818 1013 log.go:172] (0xc0008c00a0) (5) Data frame handling\nI0511 15:39:49.364938 1013 log.go:172] (0xc0000f5340) Data frame received for 1\nI0511 15:39:49.364954 1013 log.go:172] (0xc0008c0000) (1) Data frame handling\nI0511 15:39:49.364962 1013 log.go:172] (0xc0008c0000) (1) Data frame sent\nI0511 15:39:49.364969 1013 log.go:172] (0xc0000f5340) (0xc0008c0000) Stream removed, broadcasting: 1\nI0511 15:39:49.364977 1013 log.go:172] (0xc0000f5340) Go away received\nI0511 15:39:49.365354 1013 log.go:172] (0xc0000f5340) (0xc0008c0000) Stream removed, broadcasting: 1\nI0511 15:39:49.365367 1013 log.go:172] (0xc0000f5340) (0xc0005fd900) Stream removed, broadcasting: 3\nI0511 15:39:49.365375 1013 log.go:172] (0xc0000f5340) (0xc0008c00a0) Stream removed, broadcasting: 5\n" May 11 15:39:49.368: INFO: stdout: "" May 11 15:39:49.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7738 execpodw8q88 -- /bin/sh -x -c nc -zv -t -w 2 10.111.194.0 80' May 11 15:39:49.632: INFO: stderr: "I0511 15:39:49.564721 1033 log.go:172] (0xc000a32000) (0xc0005a74a0) Create stream\nI0511 15:39:49.564775 1033 log.go:172] (0xc000a32000) (0xc0005a74a0) Stream added, broadcasting: 1\nI0511 15:39:49.567476 1033 log.go:172] (0xc000a32000) Reply frame received for 1\nI0511 15:39:49.567508 1033 log.go:172] (0xc000a32000) (0xc0009ae000) Create stream\nI0511 15:39:49.567520 1033 log.go:172] (0xc000a32000) (0xc0009ae000) Stream added, broadcasting: 3\nI0511 15:39:49.568226 1033 log.go:172] (0xc000a32000) Reply frame received for 3\nI0511 15:39:49.568252 1033 log.go:172] (0xc000a32000) (0xc0009ae0a0) Create stream\nI0511 15:39:49.568259 1033 log.go:172] (0xc000a32000) (0xc0009ae0a0) Stream added, broadcasting: 5\nI0511 15:39:49.569038 1033 log.go:172] (0xc000a32000) Reply frame received for 5\nI0511 15:39:49.625696 1033 log.go:172] (0xc000a32000) Data frame received for 3\nI0511 15:39:49.625752 1033 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0511 15:39:49.625780 1033 log.go:172] (0xc000a32000) Data frame received for 5\nI0511 15:39:49.625799 1033 log.go:172] (0xc0009ae0a0) (5) Data frame handling\nI0511 15:39:49.625820 1033 log.go:172] (0xc0009ae0a0) (5) Data frame sent\nI0511 15:39:49.625832 1033 log.go:172] (0xc000a32000) Data frame received for 5\nI0511 15:39:49.625843 1033 log.go:172] (0xc0009ae0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.194.0 80\nConnection to 10.111.194.0 80 port [tcp/http] succeeded!\nI0511 15:39:49.627152 1033 log.go:172] (0xc000a32000) Data frame received for 1\nI0511 15:39:49.627173 1033 log.go:172] (0xc0005a74a0) (1) Data frame handling\nI0511 15:39:49.627186 1033 log.go:172] (0xc0005a74a0) (1) Data frame sent\nI0511 15:39:49.627198 1033 log.go:172] (0xc000a32000) (0xc0005a74a0) Stream removed, broadcasting: 1\nI0511 15:39:49.627209 1033 log.go:172] (0xc000a32000) Go away received\nI0511 15:39:49.627625 1033 log.go:172] 
(0xc000a32000) (0xc0005a74a0) Stream removed, broadcasting: 1\nI0511 15:39:49.627644 1033 log.go:172] (0xc000a32000) (0xc0009ae000) Stream removed, broadcasting: 3\nI0511 15:39:49.627654 1033 log.go:172] (0xc000a32000) (0xc0009ae0a0) Stream removed, broadcasting: 5\n" May 11 15:39:49.632: INFO: stdout: "" May 11 15:39:49.632: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:39:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7738" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.125 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":82,"skipped":1463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:39:49.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-269f684a-896e-4ac4-b00d-90f99de9f98b STEP: Creating a pod to test consume configMaps May 11 15:39:49.775: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8" in namespace "configmap-2573" to be "success or failure" May 11 15:39:49.796: INFO: Pod "pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.580231ms May 11 15:39:52.330: INFO: Pod "pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555146955s May 11 15:39:54.335: INFO: Pod "pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8": Phase="Running", Reason="", readiness=true. Elapsed: 4.560150272s May 11 15:39:56.389: INFO: Pod "pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.613676338s STEP: Saw pod success May 11 15:39:56.389: INFO: Pod "pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8" satisfied condition "success or failure" May 11 15:39:56.391: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8 container configmap-volume-test: STEP: delete the pod May 11 15:39:56.434: INFO: Waiting for pod pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8 to disappear May 11 15:39:56.438: INFO: Pod pod-configmaps-7ddcdfa8-dfb6-4186-ac21-22a29d8ec7f8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:39:56.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2573" for this suite. • [SLOW TEST:6.750 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1530,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:39:56.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-49dx STEP: Creating a pod to test atomic-volume-subpath May 11 15:39:57.188: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-49dx" in namespace "subpath-1210" to be "success or failure" May 11 15:39:57.329: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Pending", Reason="", readiness=false. Elapsed: 141.132663ms May 11 15:39:59.332: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144398156s May 11 15:40:01.336: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147620548s May 11 15:40:03.383: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 6.195568296s May 11 15:40:05.387: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 8.198676457s May 11 15:40:07.391: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 10.202897285s May 11 15:40:09.396: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.208309268s May 11 15:40:11.485: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 14.297499897s May 11 15:40:13.488: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 16.300381882s May 11 15:40:15.492: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 18.304062824s May 11 15:40:17.496: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 20.30819415s May 11 15:40:19.500: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Running", Reason="", readiness=true. Elapsed: 22.31211657s May 11 15:40:21.545: INFO: Pod "pod-subpath-test-projected-49dx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.357379165s STEP: Saw pod success May 11 15:40:21.545: INFO: Pod "pod-subpath-test-projected-49dx" satisfied condition "success or failure" May 11 15:40:21.548: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-49dx container test-container-subpath-projected-49dx: STEP: delete the pod May 11 15:40:21.596: INFO: Waiting for pod pod-subpath-test-projected-49dx to disappear May 11 15:40:21.609: INFO: Pod pod-subpath-test-projected-49dx no longer exists STEP: Deleting pod pod-subpath-test-projected-49dx May 11 15:40:21.609: INFO: Deleting pod "pod-subpath-test-projected-49dx" in namespace "subpath-1210" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:40:21.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1210" for this suite. • [SLOW TEST:25.174 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":84,"skipped":1538,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:40:21.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-9356b814-198a-4654-915c-579ee7fee45f in namespace container-probe-2542 May 11 15:40:25.936: INFO: Started pod busybox-9356b814-198a-4654-915c-579ee7fee45f in namespace 
container-probe-2542 STEP: checking the pod's current state and verifying that restartCount is present May 11 15:40:25.939: INFO: Initial restart count of pod busybox-9356b814-198a-4654-915c-579ee7fee45f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:44:26.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2542" for this suite. • [SLOW TEST:244.889 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1541,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:44:26.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
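------------------------------
The "can't tolerate" lines that follow show why only the two workers count: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the simple DaemonSet under test declares no matching toleration. A minimal sketch of the toleration that would admit its pods to that node as well (illustrative; the test DaemonSet deliberately omits it, and the snippet assumes the k8s.io/api module):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// masterToleration would be appended to spec.template.spec.tolerations of
// a DaemonSet so its pods also schedule onto the jerma-control-plane node
// tainted node-role.kubernetes.io/master:NoSchedule.
func masterToleration() corev1.Toleration {
	return corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists, // match regardless of taint value
		Effect:   corev1.TaintEffectNoSchedule,
	}
}

func main() {
	fmt.Printf("%+v\n", masterToleration())
}
------------------------------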
May 11 15:44:26.654: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:26.728: INFO: Number of nodes with available pods: 0 May 11 15:44:26.728: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:27.734: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:27.737: INFO: Number of nodes with available pods: 0 May 11 15:44:27.737: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:28.789: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:28.791: INFO: Number of nodes with available pods: 0 May 11 15:44:28.791: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:29.877: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:29.880: INFO: Number of nodes with available pods: 0 May 11 15:44:29.880: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:30.732: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:30.735: INFO: Number of nodes with available pods: 0 May 11 15:44:30.735: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:31.820: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:31.824: INFO: Number of nodes with available pods: 0 May 11 15:44:31.824: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:32.747: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:32.758: INFO: Number of nodes with available pods: 2 May 11 15:44:32.758: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
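------------------------------
The once-per-second poll repeated above, which resumes below after a daemon pod is stopped, boils down to counting distinct nodes that host at least one Ready daemon pod and comparing that count with the number of eligible nodes (2 here). A simplified stand-in for that check, not the framework's actual helper (assumes the k8s.io/api module):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodesWithAvailablePods counts the distinct nodes running at least one
// Ready daemon pod; the wait loop in the log re-evaluates a check like
// this until the count matches the number of schedulable nodes.
func nodesWithAvailablePods(pods []corev1.Pod) int {
	ready := map[string]bool{}
	for _, p := range pods {
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready[p.Spec.NodeName] = true
			}
		}
	}
	return len(ready)
}

func main() {
	pods := []corev1.Pod{{
		Spec:   corev1.PodSpec{NodeName: "jerma-worker"},
		Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}}},
	}}
	fmt.Println(nodesWithAvailablePods(pods)) // 1
}
------------------------------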
May 11 15:44:32.903: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:32.944: INFO: Number of nodes with available pods: 1 May 11 15:44:32.944: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:34.053: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:34.056: INFO: Number of nodes with available pods: 1 May 11 15:44:34.056: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:35.036: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:35.039: INFO: Number of nodes with available pods: 1 May 11 15:44:35.039: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:35.974: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:35.979: INFO: Number of nodes with available pods: 1 May 11 15:44:35.979: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:36.969: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:36.973: INFO: Number of nodes with available pods: 1 May 11 15:44:36.973: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:38.161: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:38.207: INFO: Number of nodes with available pods: 1 May 11 15:44:38.207: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:38.969: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:38.986: INFO: Number of nodes with available pods: 1 May 11 15:44:38.986: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:39.948: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:39.951: INFO: Number of nodes with available pods: 1 May 11 15:44:39.951: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:40.950: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:40.954: INFO: Number of nodes with available pods: 1 May 11 15:44:40.954: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:41.948: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:41.952: INFO: Number of nodes with available pods: 1 May 11 15:44:41.952: INFO: Node jerma-worker is running more than one daemon pod May 11 15:44:43.006: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:44:43.008: INFO: Number of nodes with available pods: 2 May 11 15:44:43.008: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2007, will wait for the garbage collector to delete the pods May 11 15:44:43.103: INFO: Deleting DaemonSet.extensions daemon-set took: 5.871435ms May 11 15:44:43.503: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.235172ms May 11 15:44:59.429: INFO: Number of nodes with available pods: 0 May 11 15:44:59.429: INFO: Number of running nodes: 0, number of available pods: 0 May 11 15:44:59.431: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2007/daemonsets","resourceVersion":"15269026"},"items":null} May 11 15:44:59.433: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2007/pods","resourceVersion":"15269026"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:44:59.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2007" for this suite. • [SLOW TEST:32.937 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":86,"skipped":1544,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:44:59.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8562 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-8562 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8562 May 11 15:44:59.730: INFO: Found 0 stateful pods, waiting for 1 May 11 15:45:09.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming 
that stateful set scale up will not halt with unhealthy stateful pod May 11 15:45:09.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 15:45:10.009: INFO: stderr: "I0511 15:45:09.870696 1055 log.go:172] (0xc0001022c0) (0xc000766000) Create stream\nI0511 15:45:09.870743 1055 log.go:172] (0xc0001022c0) (0xc000766000) Stream added, broadcasting: 1\nI0511 15:45:09.874112 1055 log.go:172] (0xc0001022c0) Reply frame received for 1\nI0511 15:45:09.874148 1055 log.go:172] (0xc0001022c0) (0xc000758140) Create stream\nI0511 15:45:09.874158 1055 log.go:172] (0xc0001022c0) (0xc000758140) Stream added, broadcasting: 3\nI0511 15:45:09.874943 1055 log.go:172] (0xc0001022c0) Reply frame received for 3\nI0511 15:45:09.874979 1055 log.go:172] (0xc0001022c0) (0xc0007b0000) Create stream\nI0511 15:45:09.874996 1055 log.go:172] (0xc0001022c0) (0xc0007b0000) Stream added, broadcasting: 5\nI0511 15:45:09.875808 1055 log.go:172] (0xc0001022c0) Reply frame received for 5\nI0511 15:45:09.963563 1055 log.go:172] (0xc0001022c0) Data frame received for 5\nI0511 15:45:09.963603 1055 log.go:172] (0xc0007b0000) (5) Data frame handling\nI0511 15:45:09.963630 1055 log.go:172] (0xc0007b0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 15:45:10.002006 1055 log.go:172] (0xc0001022c0) Data frame received for 3\nI0511 15:45:10.002024 1055 log.go:172] (0xc000758140) (3) Data frame handling\nI0511 15:45:10.002040 1055 log.go:172] (0xc000758140) (3) Data frame sent\nI0511 15:45:10.002161 1055 log.go:172] (0xc0001022c0) Data frame received for 3\nI0511 15:45:10.002178 1055 log.go:172] (0xc000758140) (3) Data frame handling\nI0511 15:45:10.002466 1055 log.go:172] (0xc0001022c0) Data frame received for 5\nI0511 15:45:10.002483 1055 log.go:172] (0xc0007b0000) (5) Data frame handling\nI0511 15:45:10.004362 1055 log.go:172] (0xc0001022c0) Data frame received for 1\nI0511 15:45:10.004379 1055 log.go:172] (0xc000766000) (1) Data frame handling\nI0511 15:45:10.004407 1055 log.go:172] (0xc000766000) (1) Data frame sent\nI0511 15:45:10.004427 1055 log.go:172] (0xc0001022c0) (0xc000766000) Stream removed, broadcasting: 1\nI0511 15:45:10.004585 1055 log.go:172] (0xc0001022c0) Go away received\nI0511 15:45:10.004729 1055 log.go:172] (0xc0001022c0) (0xc000766000) Stream removed, broadcasting: 1\nI0511 15:45:10.004742 1055 log.go:172] (0xc0001022c0) (0xc000758140) Stream removed, broadcasting: 3\nI0511 15:45:10.004747 1055 log.go:172] (0xc0001022c0) (0xc0007b0000) Stream removed, broadcasting: 5\n" May 11 15:45:10.009: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 15:45:10.009: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 15:45:10.012: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 15:45:20.016: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 15:45:20.016: INFO: Waiting for statefulset status.replicas updated to 0 May 11 15:45:20.209: INFO: POD NODE PHASE GRACE CONDITIONS May 11 15:45:20.209: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:44:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:10 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:44:59 +0000 UTC }] May 11 15:45:20.209: INFO: May 11 15:45:20.209: INFO: StatefulSet ss has not reached scale 3, at 1 May 11 15:45:21.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.812805646s May 11 15:45:22.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.692294695s May 11 15:45:23.607: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.418147205s May 11 15:45:24.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.414290034s May 11 15:45:25.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.406036175s May 11 15:45:26.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.401221801s May 11 15:45:27.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.36401367s May 11 15:45:28.683: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.344457871s May 11 15:45:29.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 339.040022ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8562 May 11 15:45:30.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 15:45:31.339: INFO: stderr: "I0511 15:45:31.198286 1075 log.go:172] (0xc000a8e0b0) (0xc0007d6000) Create stream\nI0511 15:45:31.198376 1075 log.go:172] (0xc000a8e0b0) (0xc0007d6000) Stream added, broadcasting: 1\nI0511 15:45:31.201729 1075 log.go:172] (0xc000a8e0b0) Reply frame received for 1\nI0511 15:45:31.201773 1075 log.go:172] (0xc000a8e0b0) (0xc0007d60a0) Create stream\nI0511 15:45:31.201790 1075 log.go:172] (0xc000a8e0b0) (0xc0007d60a0) Stream added, broadcasting: 3\nI0511 15:45:31.202821 1075 log.go:172] (0xc000a8e0b0) Reply frame received for 3\nI0511 15:45:31.202864 1075 log.go:172] (0xc000a8e0b0) (0xc0007d40a0) Create stream\nI0511 15:45:31.202879 1075 log.go:172] (0xc000a8e0b0) (0xc0007d40a0) Stream added, broadcasting: 5\nI0511 15:45:31.203846 1075 log.go:172] (0xc000a8e0b0) Reply frame received for 5\nI0511 15:45:31.264480 1075 log.go:172] (0xc000a8e0b0) Data frame received for 5\nI0511 15:45:31.264499 1075 log.go:172] (0xc0007d40a0) (5) Data frame handling\nI0511 15:45:31.264510 1075 log.go:172] (0xc0007d40a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 15:45:31.332364 1075 log.go:172] (0xc000a8e0b0) Data frame received for 5\nI0511 15:45:31.332402 1075 log.go:172] (0xc0007d40a0) (5) Data frame handling\nI0511 15:45:31.332423 1075 log.go:172] (0xc000a8e0b0) Data frame received for 3\nI0511 15:45:31.332431 1075 log.go:172] (0xc0007d60a0) (3) Data frame handling\nI0511 15:45:31.332440 1075 log.go:172] (0xc0007d60a0) (3) Data frame sent\nI0511 15:45:31.332448 1075 log.go:172] (0xc000a8e0b0) Data frame received for 3\nI0511 15:45:31.332455 1075 log.go:172] (0xc0007d60a0) (3) Data frame handling\nI0511 15:45:31.334612 1075 log.go:172] (0xc000a8e0b0) Data frame received for 1\nI0511 15:45:31.334641 1075 log.go:172] (0xc0007d6000) (1) Data frame handling\nI0511 15:45:31.334660 1075 log.go:172] (0xc0007d6000) (1) Data frame sent\nI0511 15:45:31.334674 1075 log.go:172] (0xc000a8e0b0) (0xc0007d6000) Stream removed, 
broadcasting: 1\nI0511 15:45:31.334796 1075 log.go:172] (0xc000a8e0b0) Go away received\nI0511 15:45:31.335083 1075 log.go:172] (0xc000a8e0b0) (0xc0007d6000) Stream removed, broadcasting: 1\nI0511 15:45:31.335101 1075 log.go:172] (0xc000a8e0b0) (0xc0007d60a0) Stream removed, broadcasting: 3\nI0511 15:45:31.335112 1075 log.go:172] (0xc000a8e0b0) (0xc0007d40a0) Stream removed, broadcasting: 5\n" May 11 15:45:31.339: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 15:45:31.339: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 15:45:31.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 15:45:31.935: INFO: stderr: "I0511 15:45:31.874598 1095 log.go:172] (0xc00088a000) (0xc00098e640) Create stream\nI0511 15:45:31.874685 1095 log.go:172] (0xc00088a000) (0xc00098e640) Stream added, broadcasting: 1\nI0511 15:45:31.878597 1095 log.go:172] (0xc00088a000) Reply frame received for 1\nI0511 15:45:31.878640 1095 log.go:172] (0xc00088a000) (0xc000662640) Create stream\nI0511 15:45:31.878651 1095 log.go:172] (0xc00088a000) (0xc000662640) Stream added, broadcasting: 3\nI0511 15:45:31.879430 1095 log.go:172] (0xc00088a000) Reply frame received for 3\nI0511 15:45:31.879469 1095 log.go:172] (0xc00088a000) (0xc00036d400) Create stream\nI0511 15:45:31.879481 1095 log.go:172] (0xc00088a000) (0xc00036d400) Stream added, broadcasting: 5\nI0511 15:45:31.880200 1095 log.go:172] (0xc00088a000) Reply frame received for 5\nI0511 15:45:31.928390 1095 log.go:172] (0xc00088a000) Data frame received for 5\nI0511 15:45:31.928438 1095 log.go:172] (0xc00036d400) (5) Data frame handling\nI0511 15:45:31.928449 1095 log.go:172] (0xc00036d400) (5) Data frame sent\nI0511 15:45:31.928458 1095 log.go:172] (0xc00088a000) Data frame received for 5\nI0511 15:45:31.928465 1095 log.go:172] (0xc00036d400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 15:45:31.928488 1095 log.go:172] (0xc00088a000) Data frame received for 3\nI0511 15:45:31.928496 1095 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 15:45:31.928505 1095 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 15:45:31.928527 1095 log.go:172] (0xc00088a000) Data frame received for 3\nI0511 15:45:31.928539 1095 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 15:45:31.929745 1095 log.go:172] (0xc00088a000) Data frame received for 1\nI0511 15:45:31.929782 1095 log.go:172] (0xc00098e640) (1) Data frame handling\nI0511 15:45:31.929796 1095 log.go:172] (0xc00098e640) (1) Data frame sent\nI0511 15:45:31.929811 1095 log.go:172] (0xc00088a000) (0xc00098e640) Stream removed, broadcasting: 1\nI0511 15:45:31.929830 1095 log.go:172] (0xc00088a000) Go away received\nI0511 15:45:31.930399 1095 log.go:172] (0xc00088a000) (0xc00098e640) Stream removed, broadcasting: 1\nI0511 15:45:31.930425 1095 log.go:172] (0xc00088a000) (0xc000662640) Stream removed, broadcasting: 3\nI0511 15:45:31.930439 1095 log.go:172] (0xc00088a000) (0xc00036d400) Stream removed, broadcasting: 5\n" May 11 15:45:31.936: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 15:45:31.936: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' May 11 15:45:31.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 15:45:32.692: INFO: stderr: "I0511 15:45:32.617244 1115 log.go:172] (0xc0001191e0) (0xc000737680) Create stream\nI0511 15:45:32.617300 1115 log.go:172] (0xc0001191e0) (0xc000737680) Stream added, broadcasting: 1\nI0511 15:45:32.619411 1115 log.go:172] (0xc0001191e0) Reply frame received for 1\nI0511 15:45:32.619453 1115 log.go:172] (0xc0001191e0) (0xc0008a8000) Create stream\nI0511 15:45:32.619465 1115 log.go:172] (0xc0001191e0) (0xc0008a8000) Stream added, broadcasting: 3\nI0511 15:45:32.620274 1115 log.go:172] (0xc0001191e0) Reply frame received for 3\nI0511 15:45:32.620297 1115 log.go:172] (0xc0001191e0) (0xc0008a80a0) Create stream\nI0511 15:45:32.620304 1115 log.go:172] (0xc0001191e0) (0xc0008a80a0) Stream added, broadcasting: 5\nI0511 15:45:32.621077 1115 log.go:172] (0xc0001191e0) Reply frame received for 5\nI0511 15:45:32.685479 1115 log.go:172] (0xc0001191e0) Data frame received for 3\nI0511 15:45:32.685519 1115 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0511 15:45:32.685541 1115 log.go:172] (0xc0008a8000) (3) Data frame sent\nI0511 15:45:32.685555 1115 log.go:172] (0xc0001191e0) Data frame received for 3\nI0511 15:45:32.685564 1115 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0511 15:45:32.685608 1115 log.go:172] (0xc0001191e0) Data frame received for 5\nI0511 15:45:32.685638 1115 log.go:172] (0xc0008a80a0) (5) Data frame handling\nI0511 15:45:32.685674 1115 log.go:172] (0xc0008a80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 15:45:32.685848 1115 log.go:172] (0xc0001191e0) Data frame received for 5\nI0511 15:45:32.685864 1115 log.go:172] (0xc0008a80a0) (5) Data frame handling\nI0511 15:45:32.687525 1115 log.go:172] (0xc0001191e0) Data frame received for 1\nI0511 15:45:32.687558 1115 log.go:172] (0xc000737680) (1) Data frame handling\nI0511 15:45:32.687585 1115 log.go:172] (0xc000737680) (1) Data frame sent\nI0511 15:45:32.687613 1115 log.go:172] (0xc0001191e0) (0xc000737680) Stream removed, broadcasting: 1\nI0511 15:45:32.687648 1115 log.go:172] (0xc0001191e0) Go away received\nI0511 15:45:32.687961 1115 log.go:172] (0xc0001191e0) (0xc000737680) Stream removed, broadcasting: 1\nI0511 15:45:32.687979 1115 log.go:172] (0xc0001191e0) (0xc0008a8000) Stream removed, broadcasting: 3\nI0511 15:45:32.687987 1115 log.go:172] (0xc0001191e0) (0xc0008a80a0) Stream removed, broadcasting: 5\n" May 11 15:45:32.693: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 15:45:32.693: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 15:45:32.922: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 15:45:32.922: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 15:45:32.922: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 11 15:45:32.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ 
|| true' May 11 15:45:33.526: INFO: stderr: "I0511 15:45:33.429478 1136 log.go:172] (0xc000104dc0) (0xc000bbc000) Create stream\nI0511 15:45:33.429532 1136 log.go:172] (0xc000104dc0) (0xc000bbc000) Stream added, broadcasting: 1\nI0511 15:45:33.432285 1136 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0511 15:45:33.432330 1136 log.go:172] (0xc000104dc0) (0xc000bbc0a0) Create stream\nI0511 15:45:33.432344 1136 log.go:172] (0xc000104dc0) (0xc000bbc0a0) Stream added, broadcasting: 3\nI0511 15:45:33.433668 1136 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0511 15:45:33.433706 1136 log.go:172] (0xc000104dc0) (0xc000bbc140) Create stream\nI0511 15:45:33.433726 1136 log.go:172] (0xc000104dc0) (0xc000bbc140) Stream added, broadcasting: 5\nI0511 15:45:33.434665 1136 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0511 15:45:33.493678 1136 log.go:172] (0xc000104dc0) Data frame received for 5\nI0511 15:45:33.493699 1136 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0511 15:45:33.493728 1136 log.go:172] (0xc000bbc140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 15:45:33.519554 1136 log.go:172] (0xc000104dc0) Data frame received for 5\nI0511 15:45:33.519611 1136 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0511 15:45:33.519635 1136 log.go:172] (0xc000104dc0) Data frame received for 3\nI0511 15:45:33.519646 1136 log.go:172] (0xc000bbc0a0) (3) Data frame handling\nI0511 15:45:33.519657 1136 log.go:172] (0xc000bbc0a0) (3) Data frame sent\nI0511 15:45:33.519672 1136 log.go:172] (0xc000104dc0) Data frame received for 3\nI0511 15:45:33.519684 1136 log.go:172] (0xc000bbc0a0) (3) Data frame handling\nI0511 15:45:33.520914 1136 log.go:172] (0xc000104dc0) Data frame received for 1\nI0511 15:45:33.520954 1136 log.go:172] (0xc000bbc000) (1) Data frame handling\nI0511 15:45:33.520973 1136 log.go:172] (0xc000bbc000) (1) Data frame sent\nI0511 15:45:33.520997 1136 log.go:172] (0xc000104dc0) (0xc000bbc000) Stream removed, broadcasting: 1\nI0511 15:45:33.521037 1136 log.go:172] (0xc000104dc0) Go away received\nI0511 15:45:33.521541 1136 log.go:172] (0xc000104dc0) (0xc000bbc000) Stream removed, broadcasting: 1\nI0511 15:45:33.521560 1136 log.go:172] (0xc000104dc0) (0xc000bbc0a0) Stream removed, broadcasting: 3\nI0511 15:45:33.521568 1136 log.go:172] (0xc000104dc0) (0xc000bbc140) Stream removed, broadcasting: 5\n" May 11 15:45:33.526: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 15:45:33.526: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 15:45:33.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 15:45:34.140: INFO: stderr: "I0511 15:45:33.645229 1156 log.go:172] (0xc000538f20) (0xc0006b3a40) Create stream\nI0511 15:45:33.645298 1156 log.go:172] (0xc000538f20) (0xc0006b3a40) Stream added, broadcasting: 1\nI0511 15:45:33.647756 1156 log.go:172] (0xc000538f20) Reply frame received for 1\nI0511 15:45:33.647791 1156 log.go:172] (0xc000538f20) (0xc000738000) Create stream\nI0511 15:45:33.647801 1156 log.go:172] (0xc000538f20) (0xc000738000) Stream added, broadcasting: 3\nI0511 15:45:33.648630 1156 log.go:172] (0xc000538f20) Reply frame received for 3\nI0511 15:45:33.648675 1156 log.go:172] (0xc000538f20) (0xc0006b3c20) Create stream\nI0511 15:45:33.648691 1156 
log.go:172] (0xc000538f20) (0xc0006b3c20) Stream added, broadcasting: 5\nI0511 15:45:33.649565 1156 log.go:172] (0xc000538f20) Reply frame received for 5\nI0511 15:45:33.697077 1156 log.go:172] (0xc000538f20) Data frame received for 5\nI0511 15:45:33.697289 1156 log.go:172] (0xc0006b3c20) (5) Data frame handling\nI0511 15:45:33.697317 1156 log.go:172] (0xc0006b3c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 15:45:34.134469 1156 log.go:172] (0xc000538f20) Data frame received for 3\nI0511 15:45:34.134639 1156 log.go:172] (0xc000738000) (3) Data frame handling\nI0511 15:45:34.134676 1156 log.go:172] (0xc000738000) (3) Data frame sent\nI0511 15:45:34.134698 1156 log.go:172] (0xc000538f20) Data frame received for 3\nI0511 15:45:34.134716 1156 log.go:172] (0xc000738000) (3) Data frame handling\nI0511 15:45:34.134735 1156 log.go:172] (0xc000538f20) Data frame received for 5\nI0511 15:45:34.134750 1156 log.go:172] (0xc0006b3c20) (5) Data frame handling\nI0511 15:45:34.137358 1156 log.go:172] (0xc000538f20) Data frame received for 1\nI0511 15:45:34.137400 1156 log.go:172] (0xc0006b3a40) (1) Data frame handling\nI0511 15:45:34.137433 1156 log.go:172] (0xc0006b3a40) (1) Data frame sent\nI0511 15:45:34.137456 1156 log.go:172] (0xc000538f20) (0xc0006b3a40) Stream removed, broadcasting: 1\nI0511 15:45:34.137493 1156 log.go:172] (0xc000538f20) Go away received\nI0511 15:45:34.137761 1156 log.go:172] (0xc000538f20) (0xc0006b3a40) Stream removed, broadcasting: 1\nI0511 15:45:34.137776 1156 log.go:172] (0xc000538f20) (0xc000738000) Stream removed, broadcasting: 3\nI0511 15:45:34.137783 1156 log.go:172] (0xc000538f20) (0xc0006b3c20) Stream removed, broadcasting: 5\n" May 11 15:45:34.141: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 15:45:34.141: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 15:45:34.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 15:45:34.848: INFO: stderr: "I0511 15:45:34.534376 1177 log.go:172] (0xc000a21600) (0xc000b38500) Create stream\nI0511 15:45:34.534432 1177 log.go:172] (0xc000a21600) (0xc000b38500) Stream added, broadcasting: 1\nI0511 15:45:34.538951 1177 log.go:172] (0xc000a21600) Reply frame received for 1\nI0511 15:45:34.538997 1177 log.go:172] (0xc000a21600) (0xc00060c5a0) Create stream\nI0511 15:45:34.539011 1177 log.go:172] (0xc000a21600) (0xc00060c5a0) Stream added, broadcasting: 3\nI0511 15:45:34.539781 1177 log.go:172] (0xc000a21600) Reply frame received for 3\nI0511 15:45:34.539816 1177 log.go:172] (0xc000a21600) (0xc00052f360) Create stream\nI0511 15:45:34.539833 1177 log.go:172] (0xc000a21600) (0xc00052f360) Stream added, broadcasting: 5\nI0511 15:45:34.540580 1177 log.go:172] (0xc000a21600) Reply frame received for 5\nI0511 15:45:34.609536 1177 log.go:172] (0xc000a21600) Data frame received for 5\nI0511 15:45:34.609558 1177 log.go:172] (0xc00052f360) (5) Data frame handling\nI0511 15:45:34.609570 1177 log.go:172] (0xc00052f360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 15:45:34.843334 1177 log.go:172] (0xc000a21600) Data frame received for 3\nI0511 15:45:34.843361 1177 log.go:172] (0xc00060c5a0) (3) Data frame handling\nI0511 15:45:34.843374 1177 log.go:172] (0xc00060c5a0) (3) Data frame sent\nI0511 
15:45:34.843421 1177 log.go:172] (0xc000a21600) Data frame received for 3\nI0511 15:45:34.843426 1177 log.go:172] (0xc00060c5a0) (3) Data frame handling\nI0511 15:45:34.843498 1177 log.go:172] (0xc000a21600) Data frame received for 5\nI0511 15:45:34.843510 1177 log.go:172] (0xc00052f360) (5) Data frame handling\nI0511 15:45:34.844884 1177 log.go:172] (0xc000a21600) Data frame received for 1\nI0511 15:45:34.844900 1177 log.go:172] (0xc000b38500) (1) Data frame handling\nI0511 15:45:34.844922 1177 log.go:172] (0xc000b38500) (1) Data frame sent\nI0511 15:45:34.844943 1177 log.go:172] (0xc000a21600) (0xc000b38500) Stream removed, broadcasting: 1\nI0511 15:45:34.845085 1177 log.go:172] (0xc000a21600) Go away received\nI0511 15:45:34.845357 1177 log.go:172] (0xc000a21600) (0xc000b38500) Stream removed, broadcasting: 1\nI0511 15:45:34.845375 1177 log.go:172] (0xc000a21600) (0xc00060c5a0) Stream removed, broadcasting: 3\nI0511 15:45:34.845385 1177 log.go:172] (0xc000a21600) (0xc00052f360) Stream removed, broadcasting: 5\n" May 11 15:45:34.848: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 15:45:34.848: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 15:45:34.848: INFO: Waiting for statefulset status.replicas updated to 0 May 11 15:45:34.869: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 11 15:45:44.874: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 15:45:44.874: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 15:45:44.874: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 15:45:44.994: INFO: POD NODE PHASE GRACE CONDITIONS May 11 15:45:44.994: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:44:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:44:59 +0000 UTC }] May 11 15:45:44.994: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:20 +0000 UTC }] May 11 15:45:44.994: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:20 +0000 UTC }] May 11 15:45:44.994: INFO: May 11 15:45:44.994: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 15:45:46.847: INFO: POD NODE PHASE GRACE CONDITIONS May 11 15:45:46.847: INFO: 
ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:44:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:44:59 +0000 UTC }] May 11 15:45:46.847: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:20 +0000 UTC }] May 11 15:45:46.847: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:45:20 +0000 UTC }] May 11 15:45:46.847: INFO: May 11 15:45:46.847: INFO: StatefulSet ss has not reached scale 0, at 3
(the same POD/NODE/PHASE/GRACE/CONDITIONS dump was polled roughly once per second: at 15:45:47.922, 15:45:48.926 and 15:45:49.931 all three pods still reported Running; at 15:45:50.936 ss-0 and ss-2 reported Pending while ss-1 was still Running; at 15:45:51.940, 15:45:52.945 and 15:45:54.035 all three reported Pending; every iteration ended with "StatefulSet ss has not reached scale 0, at 3")
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-8562 May 11 15:45:55.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 15:45:55.177: INFO: rc: 1 May 11 15:45:55.177: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 11 15:46:05.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 11 15:46:05.280: INFO: rc: 1 May 11 15:46:05.281: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8562 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
(the same RunHostCmd was retried roughly every 10 seconds from 15:46:15.281 through 15:51:03.930 — about thirty attempts — and every attempt returned rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found', since the pod had already been removed by the scale-down)
May 11 15:51:04.034: INFO: rc: 1 May 11 15:51:04.034: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 11 15:51:04.034: INFO: Scaling statefulset ss to 0 May 11 15:51:04.043: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 15:51:04.046: INFO: Deleting all statefulset in ns statefulset-8562 May 11 15:51:04.048: INFO: Scaling statefulset ss to 0 May 11 15:51:04.057: INFO: Waiting for statefulset status.replicas updated to 0 May 11 15:51:04.059: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:04.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8562" for this suite. • [SLOW TEST:364.644 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":87,"skipped":1558,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:04.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-26deb693-b00c-4bef-91b8-a3bcf29fa74b STEP: Creating the pod STEP: Updating configmap configmap-test-upd-26deb693-b00c-4bef-91b8-a3bcf29fa74b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:10.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7817" for this suite. 
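The ConfigMap-update steps above are terse because the harness hides the manifest; roughly the same behavior can be reproduced by hand. A minimal sketch (the names demo-cm and cm-watcher are illustrative, not the generated names from this run; the kubelet's periodic volume sync means the update shows up after a short delay rather than instantly):
# Create a ConfigMap and a pod that mounts it as a volume.
kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
# Update the ConfigMap; the mounted file eventually reflects the new value.
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f cm-watcher   # prints value-1, then value-2 once the kubelet re-syncs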
• [SLOW TEST:6.640 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1574,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:10.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:51:11.234: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0ad08ae9-ad35-41aa-8720-164fabfbee49" in namespace "security-context-test-2984" to be "success or failure" May 11 15:51:11.399: INFO: Pod "busybox-readonly-false-0ad08ae9-ad35-41aa-8720-164fabfbee49": Phase="Pending", Reason="", readiness=false. Elapsed: 164.836429ms May 11 15:51:13.403: INFO: Pod "busybox-readonly-false-0ad08ae9-ad35-41aa-8720-164fabfbee49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168430217s May 11 15:51:15.406: INFO: Pod "busybox-readonly-false-0ad08ae9-ad35-41aa-8720-164fabfbee49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171922043s May 11 15:51:17.701: INFO: Pod "busybox-readonly-false-0ad08ae9-ad35-41aa-8720-164fabfbee49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466648647s May 11 15:51:19.921: INFO: Pod "busybox-readonly-false-0ad08ae9-ad35-41aa-8720-164fabfbee49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.686028488s May 11 15:51:19.921: INFO: Pod "busybox-readonly-false-0ad08ae9-ad35-41aa-8720-164fabfbee49" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:19.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2984" for this suite. 
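The pod this Security Context test creates amounts to a one-shot writer with readOnlyRootFilesystem explicitly set to false; a minimal sketch (pod and container names here are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    # Writing to the root filesystem succeeds because it is not read-only,
    # so the container exits 0 and the pod phase goes to Succeeded -- the
    # "success or failure" condition the test waits for.
    command: ["sh", "-c", "echo ok > /probe && cat /probe"]
    securityContext:
      readOnlyRootFilesystem: false
EOF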
• [SLOW TEST:9.432 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1592,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:20.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:51:22.014: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 451.618113ms)
May 11 15:51:22.317: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 303.269177ms)
May 11 15:51:22.321: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.909047ms)
May 11 15:51:22.391: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 69.356206ms)
May 11 15:51:22.394: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.811051ms)
May 11 15:51:22.398: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.222093ms)
May 11 15:51:22.582: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 184.222864ms)
May 11 15:51:22.585: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.172954ms)
May 11 15:51:22.589: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.577133ms)
May 11 15:51:22.592: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.041476ms)
May 11 15:51:22.595: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.62164ms)
May 11 15:51:22.597: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.77154ms)
May 11 15:51:22.600: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.836474ms)
May 11 15:51:22.603: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.636602ms)
May 11 15:51:22.606: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.869467ms)
May 11 15:51:22.609: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.081895ms)
May 11 15:51:22.612: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.07319ms)
May 11 15:51:22.615: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.648608ms)
May 11 15:51:22.618: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.112353ms)
May 11 15:51:22.620: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.446326ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:22.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-579" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":90,"skipped":1593,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:22.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 11 15:51:23.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 11 15:51:24.378: INFO: stderr: "" May 11 15:51:24.378: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:24.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7689" for this suite. 
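Both of the last two checks are easy to rerun by hand; a sketch (the node name is taken from this run):
# cluster-info just prints the master and KubeDNS endpoints the test greps for:
kubectl cluster-info
# The proxy test issues GETs against the node's "proxy/logs" subresource;
# kubectl can send the same raw request (expect a 200 directory listing
# containing containers/ and pods/):
kubectl get --raw /api/v1/nodes/jerma-worker2/proxy/logs/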
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":91,"skipped":1608,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:24.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:51:25.009: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 15:51:27.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5989 create -f -' May 11 15:51:36.918: INFO: stderr: "" May 11 15:51:36.918: INFO: stdout: "e2e-test-crd-publish-openapi-6998-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 11 15:51:36.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5989 delete e2e-test-crd-publish-openapi-6998-crds test-cr' May 11 15:51:37.092: INFO: stderr: "" May 11 15:51:37.092: INFO: stdout: "e2e-test-crd-publish-openapi-6998-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 11 15:51:37.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5989 apply -f -' May 11 15:51:37.869: INFO: stderr: "" May 11 15:51:37.869: INFO: stdout: "e2e-test-crd-publish-openapi-6998-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 11 15:51:37.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5989 delete e2e-test-crd-publish-openapi-6998-crds test-cr' May 11 15:51:38.242: INFO: stderr: "" May 11 15:51:38.242: INFO: stdout: "e2e-test-crd-publish-openapi-6998-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 11 15:51:38.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6998-crds' May 11 15:51:38.702: INFO: stderr: "" May 11 15:51:38.702: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6998-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:42.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5989" for this suite. 
• [SLOW TEST:17.395 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":92,"skipped":1615,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:42.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 15:51:42.404: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 15:51:42.668: INFO: Waiting for terminating namespaces to be deleted... May 11 15:51:42.671: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 11 15:51:42.675: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 15:51:42.675: INFO: Container kindnet-cni ready: true, restart count 0 May 11 15:51:42.675: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 15:51:42.675: INFO: Container kube-proxy ready: true, restart count 0 May 11 15:51:42.675: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 11 15:51:42.681: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 15:51:42.681: INFO: Container kindnet-cni ready: true, restart count 0 May 11 15:51:42.681: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 11 15:51:42.681: INFO: Container kube-bench ready: false, restart count 0 May 11 15:51:42.681: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 15:51:42.681: INFO: Container kube-proxy ready: true, restart count 0 May 11 15:51:42.681: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 11 15:51:42.681: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e042b56cb4d7c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
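The FailedScheduling event above is exactly what an unmatched nodeSelector produces; a minimal reproduction (the label and pod name are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  # No node carries this label, so the scheduler reports
  # "0/3 nodes are available: 3 node(s) didn't match node selector."
  nodeSelector:
    label: nonempty
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod restricted-pod   # events show FailedScheduling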
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:43.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8841" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":93,"skipped":1628,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:43.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 15:51:44.073: INFO: Waiting up to 5m0s for pod "downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2" in namespace "downward-api-4309" to be "success or failure" May 11 15:51:44.184: INFO: Pod "downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 111.098557ms May 11 15:51:46.188: INFO: Pod "downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115471698s May 11 15:51:48.192: INFO: Pod "downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11953903s STEP: Saw pod success May 11 15:51:48.192: INFO: Pod "downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2" satisfied condition "success or failure" May 11 15:51:48.196: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2 container client-container: STEP: delete the pod May 11 15:51:48.218: INFO: Waiting for pod downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2 to disappear May 11 15:51:48.221: INFO: Pod downwardapi-volume-041f8f25-4fd0-4ab2-87cb-e37ddb55f4c2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:51:48.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4309" for this suite. 
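The downward-API pod in this test mounts its own name as a file under a downwardAPI volume; a minimal sketch (names illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs dapi-volume   # prints: dapi-volume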
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1643,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:51:48.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:51:48.558: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 11 15:51:48.630: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:48.635: INFO: Number of nodes with available pods: 0 May 11 15:51:48.635: INFO: Node jerma-worker is running more than one daemon pod May 11 15:51:49.679: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:49.719: INFO: Number of nodes with available pods: 0 May 11 15:51:49.719: INFO: Node jerma-worker is running more than one daemon pod May 11 15:51:51.032: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:51.036: INFO: Number of nodes with available pods: 0 May 11 15:51:51.036: INFO: Node jerma-worker is running more than one daemon pod May 11 15:51:51.748: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:52.185: INFO: Number of nodes with available pods: 0 May 11 15:51:52.185: INFO: Node jerma-worker is running more than one daemon pod May 11 15:51:52.885: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:52.888: INFO: Number of nodes with available pods: 0 May 11 15:51:52.888: INFO: Node jerma-worker is running more than one daemon pod May 11 15:51:53.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:54.309: INFO: Number of nodes with available pods: 0 May 11 15:51:54.309: INFO: Node jerma-worker is running more than one daemon pod May 11 15:51:54.640: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:54.644: INFO: Number of nodes with 
available pods: 0 May 11 15:51:54.644: INFO: Node jerma-worker is running more than one daemon pod May 11 15:51:55.639: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:55.642: INFO: Number of nodes with available pods: 1 May 11 15:51:55.642: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:51:56.639: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:56.643: INFO: Number of nodes with available pods: 2 May 11 15:51:56.643: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 11 15:51:57.023: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:51:57.023: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:51:57.305: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:58.394: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:51:58.394: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:51:58.397: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:51:59.310: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:51:59.310: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:51:59.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:00.538: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:00.538: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:00.755: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:01.310: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:01.310: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 11 15:52:01.310: INFO: Pod daemon-set-dz57f is not available May 11 15:52:01.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:02.311: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:02.311: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:02.311: INFO: Pod daemon-set-dz57f is not available May 11 15:52:02.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:03.540: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:03.540: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:03.540: INFO: Pod daemon-set-dz57f is not available May 11 15:52:03.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:04.308: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:04.309: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:04.309: INFO: Pod daemon-set-dz57f is not available May 11 15:52:04.312: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:05.309: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:05.309: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:05.309: INFO: Pod daemon-set-dz57f is not available May 11 15:52:05.312: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:06.309: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:06.310: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:06.310: INFO: Pod daemon-set-dz57f is not available May 11 15:52:06.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:07.310: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:07.310: INFO: Wrong image for pod: daemon-set-dz57f. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:07.310: INFO: Pod daemon-set-dz57f is not available May 11 15:52:07.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:08.310: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:08.310: INFO: Wrong image for pod: daemon-set-dz57f. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:08.310: INFO: Pod daemon-set-dz57f is not available May 11 15:52:08.315: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:09.372: INFO: Pod daemon-set-5g96m is not available May 11 15:52:09.372: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:09.479: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:10.309: INFO: Pod daemon-set-5g96m is not available May 11 15:52:10.309: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:10.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:11.310: INFO: Pod daemon-set-5g96m is not available May 11 15:52:11.310: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:11.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:12.308: INFO: Pod daemon-set-5g96m is not available May 11 15:52:12.308: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:12.311: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:13.496: INFO: Pod daemon-set-5g96m is not available May 11 15:52:13.496: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:13.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:14.308: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:14.311: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:15.308: INFO: Wrong image for pod: daemon-set-csfbd. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:15.311: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:16.316: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:16.316: INFO: Pod daemon-set-csfbd is not available May 11 15:52:16.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:17.310: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:17.310: INFO: Pod daemon-set-csfbd is not available May 11 15:52:17.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:18.419: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:18.419: INFO: Pod daemon-set-csfbd is not available May 11 15:52:18.423: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:19.309: INFO: Wrong image for pod: daemon-set-csfbd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 11 15:52:19.309: INFO: Pod daemon-set-csfbd is not available May 11 15:52:19.312: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:20.790: INFO: Pod daemon-set-88564 is not available May 11 15:52:20.823: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 11 15:52:20.871: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:21.250: INFO: Number of nodes with available pods: 1 May 11 15:52:21.250: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:52:22.255: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:22.258: INFO: Number of nodes with available pods: 1 May 11 15:52:22.258: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:52:23.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:23.290: INFO: Number of nodes with available pods: 1 May 11 15:52:23.290: INFO: Node jerma-worker2 is running more than one daemon pod May 11 15:52:24.255: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 15:52:24.259: INFO: Number of nodes with available pods: 2 May 11 15:52:24.259: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2390, will wait for the garbage collector to delete the pods May 11 15:52:24.329: INFO: Deleting DaemonSet.extensions daemon-set took: 6.047775ms May 11 15:52:24.729: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.453485ms May 11 15:52:39.718: INFO: Number of nodes with available pods: 0 May 11 15:52:39.718: INFO: Number of running nodes: 0, number of available pods: 0 May 11 15:52:39.720: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2390/daemonsets","resourceVersion":"15270705"},"items":null} May 11 15:52:39.722: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2390/pods","resourceVersion":"15270705"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:52:40.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2390" for this suite. 
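For reference, the "Wrong image for pod" polling above is just the framework waiting out an ordinary RollingUpdate of the DaemonSet's pod template. A minimal client-go sketch of the update that triggers it, using the kubeconfig path, namespace, and names from this run, and assuming a client-go recent enough that the typed clients take a context:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Swap the pod template image; with the default RollingUpdate strategy
	// the controller deletes and recreates pods node by node, which is the
	// churn polled above until every pod reports the new image.
	ds, err := cs.AppsV1().DaemonSets("daemonsets-2390").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
	if _, err := cs.AppsV1().DaemonSets("daemonsets-2390").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rolling update triggered")
}

Note that the tainted control-plane node is skipped throughout: the DaemonSet's pods carry no toleration for node-role.kubernetes.io/master, so only the two workers ever count toward availability.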
• [SLOW TEST:51.885 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":95,"skipped":1654,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:52:40.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42 May 11 15:52:40.474: INFO: Pod name my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42: Found 0 pods out of 1 May 11 15:52:45.493: INFO: Pod name my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42: Found 1 pods out of 1 May 11 15:52:45.493: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42" are running May 11 15:52:45.834: INFO: Pod "my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42-ksh75" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 15:52:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 15:52:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 15:52:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 15:52:40 +0000 UTC Reason: Message:}]) May 11 15:52:45.834: INFO: Trying to dial the pod May 11 15:52:50.846: INFO: Controller my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42: Got expected result from replica 1 [my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42-ksh75]: "my-hostname-basic-c011ba99-bb21-4086-863b-f40142e71e42-ksh75", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:52:50.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5151" for this suite. 
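The ReplicationController test above boils down to one replica that answers HTTP requests with its own pod name, which is what "Got expected result from replica 1" verifies. A minimal sketch of an equivalent controller, assuming the agnhost serve-hostname image used throughout this suite (controller name and namespace are illustrative; serve-hostname listens on 9376 by default):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
						// serve-hostname replies to HTTP requests with the pod
						// name, so dialing each replica proves it is serving.
						Args:  []string{"serve-hostname"},
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers("default").Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}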
• [SLOW TEST:10.710 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":96,"skipped":1665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:52:50.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 15:52:51.368: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 15:52:53.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:52:55.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 15:52:58.579: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:52:58.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9437-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:52:59.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2462" for this suite. STEP: Destroying namespace "webhook-2462-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.375 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":97,"skipped":1695,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:53:00.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 11 15:53:00.675: INFO: Waiting up to 5m0s for pod "pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736" in namespace "emptydir-9263" to be "success or failure" May 11 15:53:00.796: INFO: Pod "pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736": Phase="Pending", Reason="", readiness=false. Elapsed: 121.303561ms May 11 15:53:02.800: INFO: Pod "pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125418979s May 11 15:53:05.011: INFO: Pod "pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.33570576s STEP: Saw pod success May 11 15:53:05.011: INFO: Pod "pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736" satisfied condition "success or failure" May 11 15:53:05.013: INFO: Trying to get logs from node jerma-worker pod pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736 container test-container: STEP: delete the pod May 11 15:53:05.064: INFO: Waiting for pod pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736 to disappear May 11 15:53:05.337: INFO: Pod pod-06ba97de-1b68-4ea6-bd09-eb90ba13e736 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:53:05.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9263" for this suite. • [SLOW TEST:5.310 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1703,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:53:05.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 15:53:37.030382 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
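The orphan check above hinges on a single API call: the deployment is deleted with PropagationPolicy set to Orphan, so the garbage collector must strip owner references from the ReplicaSet instead of cascading the delete, and the 30-second wait confirms the ReplicaSet survives. A minimal sketch of that delete, with an illustrative deployment name and namespace:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Orphan propagation removes the Deployment object but leaves its
	// ReplicaSet (and the ReplicaSet's pods) in place, minus the owner
	// reference that would otherwise mark them for collection.
	policy := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments("default").Delete(context.TODO(), "test-orphan-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}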
May 11 15:53:37.030: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:53:37.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8861" for this suite. • [SLOW TEST:31.497 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":99,"skipped":1716,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:53:37.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 15:53:37.478: INFO: Creating deployment "test-recreate-deployment" May 11 15:53:37.617: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 11 15:53:37.672: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 11 15:53:39.679: INFO: Waiting deployment "test-recreate-deployment" to complete May 11 15:53:39.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:53:41.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724809217, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 15:53:43.737: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 11 15:53:43.774: INFO: Updating deployment test-recreate-deployment May 11 15:53:43.774: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 15:53:45.002: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1044 /apis/apps/v1/namespaces/deployment-1044/deployments/test-recreate-deployment 14079609-824b-4cad-81df-1ce89364c9f8 15271145 2 2020-05-11 15:53:37 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f86278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum 
availability.,LastUpdateTime:2020-05-11 15:53:44 +0000 UTC,LastTransitionTime:2020-05-11 15:53:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-11 15:53:44 +0000 UTC,LastTransitionTime:2020-05-11 15:53:37 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 11 15:53:45.503: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1044 /apis/apps/v1/namespaces/deployment-1044/replicasets/test-recreate-deployment-5f94c574ff 182b039f-468f-4f36-87b2-896d3e54a3f4 15271143 1 2020-05-11 15:53:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 14079609-824b-4cad-81df-1ce89364c9f8 0xc00284c117 0xc00284c118}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00284c178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 15:53:45.503: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 11 15:53:45.503: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1044 /apis/apps/v1/namespaces/deployment-1044/replicasets/test-recreate-deployment-799c574856 f1a778ac-ff3f-4b04-91d0-22c3f7899d48 15271128 2 2020-05-11 15:53:37 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 14079609-824b-4cad-81df-1ce89364c9f8 0xc00284c1e7 0xc00284c1e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00284c258 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 15:53:45.569: INFO: Pod "test-recreate-deployment-5f94c574ff-srzr9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-srzr9 test-recreate-deployment-5f94c574ff- deployment-1044 /api/v1/namespaces/deployment-1044/pods/test-recreate-deployment-5f94c574ff-srzr9 67afd40c-4e83-4cfb-afd5-6eb5ae9245fe 15271144 0 2020-05-11 15:53:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 182b039f-468f-4f36-87b2-896d3e54a3f4 0xc00284c6e7 0xc00284c6e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-26jz9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-26jz9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-26jz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpre
adConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:53:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:53:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:53:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 15:53:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 15:53:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:53:45.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1044" for this suite. • [SLOW TEST:9.597 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":100,"skipped":1721,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:53:46.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:54:01.439: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "kubelet-test-2082" for this suite. • [SLOW TEST:15.240 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:54:01.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 11 15:54:02.252: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5843" to be "success or failure" May 11 15:54:02.290: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 37.973936ms May 11 15:54:04.354: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101838391s May 11 15:54:06.432: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17986566s May 11 15:54:08.642: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389429318s May 11 15:54:10.647: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394359308s May 11 15:54:12.798: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.546224816s May 11 15:54:14.803: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.550277313s STEP: Saw pod success May 11 15:54:14.803: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 11 15:54:14.806: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 11 15:54:14.868: INFO: Waiting for pod pod-host-path-test to disappear May 11 15:54:16.636: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:54:16.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5843" for this suite. 
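The pod-host-path-test pod above mounts a hostPath volume and has its containers print the volume's filesystem type and mode for the framework to assert on. A minimal sketch of such a pod, assuming the agnhost image's mounttest subcommand and its --fs_type/--file_mode flags as used by e2e images of this era (pod name, host path, and namespace are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				// Print the mounted volume's fs type and mode bits to stdout;
				// the framework then reads the container log and asserts.
				Args:         []string{"mounttest", "--fs_type=/test-volume", "--file_mode=/test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/host-path-test", Type: &hostPathType},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}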
• [SLOW TEST:15.154 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1777,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:54:17.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-ecb5ac3a-5933-4543-bf13-abcba7de0882 in namespace container-probe-9442 May 11 15:54:29.926: INFO: Started pod test-webserver-ecb5ac3a-5933-4543-bf13-abcba7de0882 in namespace container-probe-9442 STEP: checking the pod's current state and verifying that restartCount is present May 11 15:54:29.928: INFO: Initial restart count of pod test-webserver-ecb5ac3a-5933-4543-bf13-abcba7de0882 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:58:31.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9442" for this suite. 
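The probe test above is the negative case: a webserver whose HTTP liveness probe keeps succeeding, so the kubelet must never restart it and restartCount stays at the initial 0 for the full four-minute observation window. A minimal sketch of such a pod, assuming the test-webserver image used by this suite and a recent client-go (where the probe field is named ProbeHandler; releases contemporary with this run call it Handler):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0",
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				// The kubelet polls this endpoint; as long as it returns 2xx
				// the container is considered live and is never restarted.
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}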
• [SLOW TEST:254.613 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1796,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:58:31.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 15:58:32.199: INFO: Waiting up to 5m0s for pod "pod-49d6b320-ec9b-44be-a285-d720777aacab" in namespace "emptydir-5117" to be "success or failure" May 11 15:58:32.289: INFO: Pod "pod-49d6b320-ec9b-44be-a285-d720777aacab": Phase="Pending", Reason="", readiness=false. Elapsed: 90.349683ms May 11 15:58:34.441: INFO: Pod "pod-49d6b320-ec9b-44be-a285-d720777aacab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242523159s May 11 15:58:36.662: INFO: Pod "pod-49d6b320-ec9b-44be-a285-d720777aacab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463290796s May 11 15:58:38.758: INFO: Pod "pod-49d6b320-ec9b-44be-a285-d720777aacab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559537206s May 11 15:58:40.762: INFO: Pod "pod-49d6b320-ec9b-44be-a285-d720777aacab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.562915347s STEP: Saw pod success May 11 15:58:40.762: INFO: Pod "pod-49d6b320-ec9b-44be-a285-d720777aacab" satisfied condition "success or failure" May 11 15:58:40.764: INFO: Trying to get logs from node jerma-worker2 pod pod-49d6b320-ec9b-44be-a285-d720777aacab container test-container: STEP: delete the pod May 11 15:58:40.902: INFO: Waiting for pod pod-49d6b320-ec9b-44be-a285-d720777aacab to disappear May 11 15:58:40.904: INFO: Pod pod-49d6b320-ec9b-44be-a285-d720777aacab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:58:40.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5117" for this suite. 
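The (root,0666,default) case above creates a file with mode 0666 as root on an emptyDir backed by the node's default medium, then asserts the observed permissions from the pod log. A minimal sketch of the test pod, assuming the agnhost mounttest flags used by e2e images of this era (pod name and namespace are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				// Create a file with mode 0666 and echo its resulting
				// permissions; the framework greps the log for them.
				Args:         []string{"mounttest", "--new_file_0666=/test-volume/test-file", "--file_perm=/test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium empty selects the node's default storage
				// medium; StorageMediumMemory would give the tmpfs variant
				// exercised earlier in this run.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}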
• [SLOW TEST:9.268 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1808,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:58:40.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-277 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 15:58:40.965: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 15:59:11.837: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.240:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-277 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 15:59:11.837: INFO: >>> kubeConfig: /root/.kube/config I0511 15:59:11.867966 6 log.go:172] (0xc00278a8f0) (0xc00197f540) Create stream I0511 15:59:11.867997 6 log.go:172] (0xc00278a8f0) (0xc00197f540) Stream added, broadcasting: 1 I0511 15:59:11.870463 6 log.go:172] (0xc00278a8f0) Reply frame received for 1 I0511 15:59:11.870495 6 log.go:172] (0xc00278a8f0) (0xc00277e280) Create stream I0511 15:59:11.870505 6 log.go:172] (0xc00278a8f0) (0xc00277e280) Stream added, broadcasting: 3 I0511 15:59:11.871400 6 log.go:172] (0xc00278a8f0) Reply frame received for 3 I0511 15:59:11.871429 6 log.go:172] (0xc00278a8f0) (0xc00134c1e0) Create stream I0511 15:59:11.871442 6 log.go:172] (0xc00278a8f0) (0xc00134c1e0) Stream added, broadcasting: 5 I0511 15:59:11.872230 6 log.go:172] (0xc00278a8f0) Reply frame received for 5 I0511 15:59:11.954674 6 log.go:172] (0xc00278a8f0) Data frame received for 3 I0511 15:59:11.954710 6 log.go:172] (0xc00277e280) (3) Data frame handling I0511 15:59:11.954736 6 log.go:172] (0xc00277e280) (3) Data frame sent I0511 15:59:11.954753 6 log.go:172] (0xc00278a8f0) Data frame received for 3 I0511 15:59:11.954767 6 log.go:172] (0xc00277e280) (3) Data frame handling I0511 15:59:11.954828 6 log.go:172] (0xc00278a8f0) Data frame received for 5 I0511 15:59:11.954848 6 log.go:172] (0xc00134c1e0) (5) Data frame handling I0511 15:59:11.956250 6 log.go:172] (0xc00278a8f0) Data frame received for 1 I0511 15:59:11.956260 6 log.go:172] (0xc00197f540) (1) Data frame handling I0511 15:59:11.956265 6 log.go:172] (0xc00197f540) (1) Data frame sent I0511 
15:59:11.956273 6 log.go:172] (0xc00278a8f0) (0xc00197f540) Stream removed, broadcasting: 1 I0511 15:59:11.956360 6 log.go:172] (0xc00278a8f0) Go away received I0511 15:59:11.956407 6 log.go:172] (0xc00278a8f0) (0xc00197f540) Stream removed, broadcasting: 1 I0511 15:59:11.956425 6 log.go:172] (0xc00278a8f0) (0xc00277e280) Stream removed, broadcasting: 3 I0511 15:59:11.956436 6 log.go:172] (0xc00278a8f0) (0xc00134c1e0) Stream removed, broadcasting: 5 May 11 15:59:11.956: INFO: Found all expected endpoints: [netserver-0] May 11 15:59:11.986: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.156:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-277 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 15:59:11.986: INFO: >>> kubeConfig: /root/.kube/config I0511 15:59:12.022833 6 log.go:172] (0xc00278af20) (0xc00197fae0) Create stream I0511 15:59:12.022862 6 log.go:172] (0xc00278af20) (0xc00197fae0) Stream added, broadcasting: 1 I0511 15:59:12.025480 6 log.go:172] (0xc00278af20) Reply frame received for 1 I0511 15:59:12.025523 6 log.go:172] (0xc00278af20) (0xc00134c320) Create stream I0511 15:59:12.025538 6 log.go:172] (0xc00278af20) (0xc00134c320) Stream added, broadcasting: 3 I0511 15:59:12.026479 6 log.go:172] (0xc00278af20) Reply frame received for 3 I0511 15:59:12.026518 6 log.go:172] (0xc00278af20) (0xc00240b540) Create stream I0511 15:59:12.026535 6 log.go:172] (0xc00278af20) (0xc00240b540) Stream added, broadcasting: 5 I0511 15:59:12.027389 6 log.go:172] (0xc00278af20) Reply frame received for 5 I0511 15:59:12.090187 6 log.go:172] (0xc00278af20) Data frame received for 5 I0511 15:59:12.090228 6 log.go:172] (0xc00240b540) (5) Data frame handling I0511 15:59:12.090255 6 log.go:172] (0xc00278af20) Data frame received for 3 I0511 15:59:12.090270 6 log.go:172] (0xc00134c320) (3) Data frame handling I0511 15:59:12.090292 6 log.go:172] (0xc00134c320) (3) Data frame sent I0511 15:59:12.090311 6 log.go:172] (0xc00278af20) Data frame received for 3 I0511 15:59:12.090324 6 log.go:172] (0xc00134c320) (3) Data frame handling I0511 15:59:12.092271 6 log.go:172] (0xc00278af20) Data frame received for 1 I0511 15:59:12.092323 6 log.go:172] (0xc00197fae0) (1) Data frame handling I0511 15:59:12.092355 6 log.go:172] (0xc00197fae0) (1) Data frame sent I0511 15:59:12.092372 6 log.go:172] (0xc00278af20) (0xc00197fae0) Stream removed, broadcasting: 1 I0511 15:59:12.092390 6 log.go:172] (0xc00278af20) Go away received I0511 15:59:12.092563 6 log.go:172] (0xc00278af20) (0xc00197fae0) Stream removed, broadcasting: 1 I0511 15:59:12.092609 6 log.go:172] (0xc00278af20) (0xc00134c320) Stream removed, broadcasting: 3 I0511 15:59:12.092639 6 log.go:172] (0xc00278af20) (0xc00240b540) Stream removed, broadcasting: 5 May 11 15:59:12.092: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:59:12.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-277" for this suite. 
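The ExecWithOptions entries and the stream-frame chatter above are the framework shelling into the host-network test pod and curling each netserver's /hostName endpoint over the API server's exec subresource. A rough equivalent with client-go's remotecommand package, using the pod, namespace, and target IP from this run, and assuming a client-go recent enough to provide StreamWithContext (older releases expose Stream without a context):

package main

import (
	"bytes"
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Build a request against the pod's "exec" subresource, mirroring the
	// ExecWithOptions call logged above.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pod-network-test-277").
		Name("host-test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost",
			Command:   []string{"/bin/sh", "-c", "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.240:8080/hostName"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// This drives the SPDY streams whose Create/Data/Remove frames fill the
	// log above; stdout carries the netserver pod's hostname back.
	if err := exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("hostName endpoint answered: %s\n", stdout.String())
}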
• [SLOW TEST:31.189 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1820,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:59:12.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 15:59:17.402: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:59:17.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-738" for this suite. 
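The assertion "Expected: &{} to match Container's Termination Message: --" above holds because TerminationMessageFallbackToLogsOnError only falls back to the log tail when a container fails; a container that exits 0 without writing its termination-log keeps an empty message. A minimal sketch of such a container (pod name, image, and namespace are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo hello > /dev/null; exit 0"},
				// On success the kubelet reads /dev/termination-log, finds it
				// unwritten, and (because the container did not error) never
				// consults the log tail, leaving the message empty.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}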
• [SLOW TEST:5.755 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1822,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:59:17.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f41889b4-6103-474f-9ff4-1024c380e1a2 STEP: Creating a pod to test consume configMaps May 11 15:59:19.071: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559" in namespace "projected-7744" to be "success or failure" May 11 15:59:19.225: INFO: Pod "pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559": Phase="Pending", Reason="", readiness=false. Elapsed: 154.130035ms May 11 15:59:22.253: INFO: Pod "pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559": Phase="Pending", Reason="", readiness=false. Elapsed: 3.182029503s May 11 15:59:24.315: INFO: Pod "pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559": Phase="Pending", Reason="", readiness=false. Elapsed: 5.244306501s May 11 15:59:26.319: INFO: Pod "pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559": Phase="Running", Reason="", readiness=true. Elapsed: 7.247721157s May 11 15:59:28.323: INFO: Pod "pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.251850576s STEP: Saw pod success May 11 15:59:28.323: INFO: Pod "pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559" satisfied condition "success or failure" May 11 15:59:28.326: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559 container projected-configmap-volume-test: STEP: delete the pod May 11 15:59:28.359: INFO: Waiting for pod pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559 to disappear May 11 15:59:28.369: INFO: Pod pod-projected-configmaps-e445953f-aa4b-43bf-a9a7-f2fe55016559 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 15:59:28.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7744" for this suite. • [SLOW TEST:10.518 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 15:59:28.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3091 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-3091 May 11 15:59:28.812: INFO: Found 0 stateful pods, waiting for 1 May 11 15:59:38.939: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 15:59:38.993: INFO: Deleting all statefulset in ns statefulset-3091 May 11 15:59:39.258: INFO: Scaling statefulset ss to 0 May 11 15:59:59.944: INFO: Waiting for statefulset status.replicas updated to 0 May 11 15:59:59.947: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:59:28.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3091
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-3091
May 11 15:59:28.812: INFO: Found 0 stateful pods, waiting for 1
May 11 15:59:38.939: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 11 15:59:38.993: INFO: Deleting all statefulset in ns statefulset-3091
May 11 15:59:39.258: INFO: Scaling statefulset ss to 0
May 11 15:59:59.944: INFO: Waiting for statefulset status.replicas updated to 0
May 11 15:59:59.947: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 15:59:59.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3091" for this suite.
• [SLOW TEST:31.599 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":108,"skipped":1877,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
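The scale-subresource round trip that spec performs, sketched with client-go method signatures as of v0.17 (matching the v1.17 suite above; later client-go releases add a context.Context parameter). The clientset, namespace, and name are placeholders.

    // Sketch: GET .../statefulsets/ss/scale, bump it, and write it back
    // through the scale subresource; the server updates Spec.Replicas on
    // the StatefulSet itself, which the spec then verifies.
    package sketch

    import (
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func bumpReplicas(clientset *kubernetes.Clientset, ns, name string) error {
    	// The scale subresource is an autoscaling/v1 Scale object.
    	scale, err := clientset.AppsV1().StatefulSets(ns).GetScale(name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas++ // mutate only the scale subresource
    	_, err = clientset.AppsV1().StatefulSets(ns).UpdateScale(name, scale)
    	return err
    }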
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 15:59:59.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-59e50a32-3afc-48c3-94d5-0a987afca048
STEP: Creating a pod to test consume configMaps
May 11 16:00:00.040: INFO: Waiting up to 5m0s for pod "pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809" in namespace "configmap-7703" to be "success or failure"
May 11 16:00:00.043: INFO: Pod "pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809": Phase="Pending", Reason="", readiness=false. Elapsed: 3.543255ms
May 11 16:00:02.208: INFO: Pod "pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168106304s
May 11 16:00:04.256: INFO: Pod "pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216640843s
May 11 16:00:06.411: INFO: Pod "pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.370886081s
STEP: Saw pod success
May 11 16:00:06.411: INFO: Pod "pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809" satisfied condition "success or failure"
May 11 16:00:06.456: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809 container configmap-volume-test:
STEP: delete the pod
May 11 16:00:06.563: INFO: Waiting for pod pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809 to disappear
May 11 16:00:06.608: INFO: Pod pod-configmaps-b757f74a-da4b-4c22-8b36-3e91c50c2809 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 16:00:06.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7703" for this suite.
• [SLOW TEST:6.660 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1919,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 16:00:06.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May 11 16:00:06.798: INFO: >>> kubeConfig: /root/.kube/config
May 11 16:00:09.275: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 16:00:22.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7320" for this suite.
• [SLOW TEST:15.839 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":110,"skipped":1928,"failed":0}
SSSSS
------------------------------
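A sketch of the "two CRDs, same group and version, different kinds" setup that spec publishes, using the apiextensions.k8s.io/v1 Go types; the group, kinds, and schema here are illustrative, not the suite's generated ones.

    // Sketch: two CRDs sharing a group/version but declaring different
    // kinds, so both schemas are published under the same OpenAPI
    // group-version by the apiserver.
    package sketch

    import (
    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func crdForKind(kind, plural string) *apiextensionsv1.CustomResourceDefinition {
    	return &apiextensionsv1.CustomResourceDefinition{
    		ObjectMeta: metav1.ObjectMeta{Name: plural + ".example.com"},
    		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
    			Group: "example.com",
    			Scope: apiextensionsv1.NamespaceScoped,
    			Names: apiextensionsv1.CustomResourceDefinitionNames{
    				Kind:   kind,
    				Plural: plural,
    			},
    			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
    				Name:    "v1",
    				Served:  true,
    				Storage: true,
    				// apiextensions/v1 requires a structural schema; this is
    				// what gets published into the OpenAPI document.
    				Schema: &apiextensionsv1.CustomResourceValidation{
    					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
    						Type: "object",
    						Properties: map[string]apiextensionsv1.JSONSchemaProps{
    							"spec": {Type: "object"},
    						},
    					},
    				},
    			}},
    		},
    	}
    }

crdForKind("Foo", "foos") and crdForKind("Bar", "bars") would mirror the two-CRD case the spec exercises.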
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 16:00:22.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 16:00:29.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3483" for this suite.
STEP: Destroying namespace "nsdeletetest-5408" for this suite.
May 11 16:00:29.437: INFO: Namespace nsdeletetest-5408 was already deleted
STEP: Destroying namespace "nsdeletetest-2115" for this suite.
• [SLOW TEST:6.966 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":111,"skipped":1933,"failed":0}
SSSSSSSSSSSSS
------------------------------
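The behaviour being asserted, as a client-go sketch with v0.17-style signatures and placeholder names. The spec itself waits until the namespace object is fully removed before re-checking; this sketch glosses that wait with a comment.

    // Sketch: deleting a namespace garbage-collects the services inside it.
    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func serviceGoneAfterNamespaceDelete(cs *kubernetes.Clientset) (bool, error) {
    	ns := "nsdeletetest-demo" // illustrative name
    	if _, err := cs.CoreV1().Namespaces().Create(&corev1.Namespace{
    		ObjectMeta: metav1.ObjectMeta{Name: ns},
    	}); err != nil {
    		return false, err
    	}
    	svc := &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
    		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
    	}
    	if _, err := cs.CoreV1().Services(ns).Create(svc); err != nil {
    		return false, err
    	}
    	if err := cs.CoreV1().Namespaces().Delete(ns, nil); err != nil {
    		return false, err
    	}
    	// In practice, poll here until the namespace finishes terminating
    	// (the "Waiting for the namespace to be removed." step above);
    	// only then must the service lookup return NotFound.
    	_, err := cs.CoreV1().Services(ns).Get("test-service", metav1.GetOptions{})
    	return errors.IsNotFound(err), nil
    }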
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 16:00:29.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 16:00:51.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6864" for this suite.
• [SLOW TEST:22.105 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":112,"skipped":1946,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
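A sketch of quota objects like the ones these two ResourceQuota specs create, using the core/v1 types; names and amounts are illustrative. The unscoped quota charges a pod's requests and limits against its Hard limits while the pod exists and releases them on deletion; the BestEffort-scoped variant, exercised by the next spec, only counts pods that declare no requests or limits.

    // Sketch: an unscoped quota plus a BestEffort-scoped one.
    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func demoQuotas() []*corev1.ResourceQuota {
    	plain := &corev1.ResourceQuota{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
    		Spec: corev1.ResourceQuotaSpec{
    			Hard: corev1.ResourceList{
    				corev1.ResourcePods:   resource.MustParse("2"),
    				corev1.ResourceCPU:    resource.MustParse("1"),
    				corev1.ResourceMemory: resource.MustParse("500Mi"),
    			},
    		},
    	}
    	// Scoped variant: only BestEffort pods are counted against it,
    	// which is how the next spec separates best-effort usage from the rest.
    	bestEffort := &corev1.ResourceQuota{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-quota-besteffort"},
    		Spec: corev1.ResourceQuotaSpec{
    			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("1")},
    			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
    		},
    	}
    	return []*corev1.ResourceQuota{plain, bestEffort}
    }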
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 16:00:51.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 16:01:08.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1049" for this suite.
• [SLOW TEST:17.272 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":113,"skipped":1975,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 16:01:08.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 16:01:09.230: INFO: Creating deployment "webserver-deployment"
May 11 16:01:09.297: INFO: Waiting for observed generation 1
May 11 16:01:11.581: INFO: Waiting for all required pods to come up
May 11 16:01:11.619: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 11 16:01:23.629: INFO: Waiting for deployment "webserver-deployment" to complete
May 11 16:01:23.635: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 11 16:01:23.641: INFO: Updating deployment webserver-deployment
May 11 16:01:23.641: INFO: Waiting for observed generation 2
May 11 16:01:25.732: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 11 16:01:25.735: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 11 16:01:25.737: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 11 16:01:25.743: INFO: Verifying that
the second rollout's replicaset has .status.availableReplicas = 0 May 11 16:01:25.743: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 11 16:01:25.744: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 11 16:01:25.747: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 11 16:01:25.747: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 11 16:01:25.752: INFO: Updating deployment webserver-deployment May 11 16:01:25.752: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 11 16:01:26.792: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 11 16:01:27.344: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 16:01:28.403: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4046 /apis/apps/v1/namespaces/deployment-4046/deployments/webserver-deployment ec21c738-4fbe-45e8-8998-936d7bf61eba 15273048 3 2020-05-11 16:01:09 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004138ea8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-11 16:01:24 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 16:01:26 +0000 UTC,LastTransitionTime:2020-05-11 16:01:26 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 11 16:01:28.451: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-4046 /apis/apps/v1/namespaces/deployment-4046/replicasets/webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 15273094 3 2020-05-11 16:01:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ec21c738-4fbe-45e8-8998-936d7bf61eba 0xc004139377 0xc004139378}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004139428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 16:01:28.451: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 11 16:01:28.451: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-4046 /apis/apps/v1/namespaces/deployment-4046/replicasets/webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 15273097 3 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ec21c738-4fbe-45e8-8998-936d7bf61eba 0xc0041392b7 0xc0041392b8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004139318 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 11 16:01:28.620: INFO: Pod "webserver-deployment-595b5b9587-5k8nf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5k8nf webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-5k8nf b9c55f4f-d24a-4c1d-8e16-dd59da856f2d 15272892 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc004139927 0xc004139928}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.245,StartTime:2020-05-11 16:01:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3914b6db28ef866a7d7603337f1cfcb87be020db9aac231574c8c81357e6d76,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.620: INFO: Pod "webserver-deployment-595b5b9587-7d5h9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7d5h9 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-7d5h9 ae6731cd-8d9d-4ac0-b2c8-f2e56ba9d80f 15272928 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc004139aa7 0xc004139aa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.162,StartTime:2020-05-11 16:01:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ebbfa7fe0f2c0ccd8055a2f48faf2870ca47c8c67ed11b294ba0c680508249f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.621: INFO: Pod "webserver-deployment-595b5b9587-9cpzs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9cpzs webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-9cpzs 278a1399-8842-43cf-85de-adb880b44d91 15273077 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc004139c77 0xc004139c78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.621: INFO: Pod "webserver-deployment-595b5b9587-9r54n" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9r54n webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-9r54n 67b62cda-66ee-4cac-8d53-dc2aa9ed8972 15272959 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc004139dd7 0xc004139dd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.249,StartTime:2020-05-11 16:01:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e1f6f6e2718d906734e012b26b527c856819d33608ed7785f76cbf5082a93500,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.621: INFO: Pod "webserver-deployment-595b5b9587-cf87p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cf87p webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-cf87p 0f57ede2-42b5-470a-9af5-1dcf8fe61a56 15273104 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc004139fd7 0xc004139fd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 16:01:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.621: INFO: Pod "webserver-deployment-595b5b9587-clsgt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-clsgt webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-clsgt 92c78192-62a2-4cd3-9b17-d44ceb947928 15273047 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ee157 0xc0040ee158}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.622: INFO: Pod "webserver-deployment-595b5b9587-g6f4d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g6f4d webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-g6f4d 7ff473fe-e4ac-4c18-a847-272d5d90faef 15273089 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ee277 0xc0040ee278}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 16:01:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.622: INFO: Pod "webserver-deployment-595b5b9587-hbxt6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hbxt6 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-hbxt6 cec963ca-5c2c-4c24-970d-d8299d28be82 15272937 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ee437 0xc0040ee438}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.247,StartTime:2020-05-11 16:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2249433d1b8acb840f5f07b3b185d8566ce51c3e5c62ed73062860a0720f8fd6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.622: INFO: Pod "webserver-deployment-595b5b9587-hwlw4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hwlw4 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-hwlw4 7b940df4-1ada-4255-ad97-a0bebe363f2a 15273080 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ee5d7 0xc0040ee5d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.622: INFO: Pod "webserver-deployment-595b5b9587-jzbqz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jzbqz webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-jzbqz 908fda35-5e06-4318-8efb-db5317fa8fb5 15273059 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ee737 0xc0040ee738}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.623: INFO: Pod "webserver-deployment-595b5b9587-lh86f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lh86f webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-lh86f cdcaaae9-061e-4c19-afc8-f63aae0ce187 15273082 0 
2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ee887 0xc0040ee888}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.623: INFO: Pod "webserver-deployment-595b5b9587-md7rb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-md7rb webserver-deployment-595b5b9587- deployment-4046 
/api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-md7rb 72e958e5-24de-4d6e-a934-0075927b159f 15272953 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ee9d7 0xc0040ee9d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.248,StartTime:2020-05-11 16:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8331f9b13675bd46e31e0f8fc00e59f6699d9479799145b964c8fe9dbd822bcf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.623: INFO: Pod "webserver-deployment-595b5b9587-mwdfv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mwdfv webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-mwdfv 9137bb01-f9a0-43ed-a120-750b9f2f6a52 15272925 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040eeb87 0xc0040eeb88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-sch
eduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.161,StartTime:2020-05-11 16:01:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7936f375cb50a19f5991f953da7abdfb669f1c7fa34263a6302edc8e8fb9b07e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.623: INFO: Pod "webserver-deployment-595b5b9587-nm757" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nm757 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-nm757 410ae2f7-c5c2-4e8b-a430-552e04e1d9c6 15273063 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040eedc7 0xc0040eedc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.623: INFO: Pod "webserver-deployment-595b5b9587-nm799" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nm799 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-nm799 1d8cf713-f532-48c7-a406-32b9b6e1f206 15273081 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040eef47 0xc0040eef48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.623: INFO: Pod "webserver-deployment-595b5b9587-rczx6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rczx6 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-rczx6 406c7054-ea0c-44bd-8689-0c539d2d2f1b 15273066 0 
2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ef067 0xc0040ef068}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.624: INFO: Pod "webserver-deployment-595b5b9587-rjc8p" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rjc8p webserver-deployment-595b5b9587- deployment-4046 
/api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-rjc8p 12064fab-cd60-4efa-a4c9-3857a09c52fb 15272947 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ef1c7 0xc0040ef1c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.163,StartTime:2020-05-11 16:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9c9797b7fa1b17a90cae321394a80563fd8d76a15f7de1e53adc0382a30521a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.624: INFO: Pod "webserver-deployment-595b5b9587-rv2b7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rv2b7 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-rv2b7 7d1e5f0b-cb23-4d97-8a32-cdac56798aba 15272915 0 2020-05-11 16:01:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ef3d7 0xc0040ef3d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-sched
uler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.246,StartTime:2020-05-11 16:01:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:01:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3253f0abbc3794a9f1f2d94dba492bd37a28f7ed87f63cb2c10e64d0bad76302,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.624: INFO: Pod "webserver-deployment-595b5b9587-wcwnz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wcwnz webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-wcwnz 024a2d74-f893-4d39-a5e1-ba3b60e56670 15273065 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ef587 0xc0040ef588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.624: INFO: Pod "webserver-deployment-595b5b9587-ws8g9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ws8g9 webserver-deployment-595b5b9587- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-595b5b9587-ws8g9 484af8f4-79f2-4b58-9652-1189b6cf9ab0 15273084 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4cd35b8a-86dd-4fa3-a103-765da3a6b1f0 0xc0040ef707 0xc0040ef708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.624: INFO: Pod "webserver-deployment-c7997dcc8-52scc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-52scc webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-52scc 4c8d7896-145f-4191-bf1a-d3a0592ff66a 15273003 0 2020-05-11 
16:01:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040ef847 0xc0040ef848}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-11 16:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
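
Every webserver-deployment-c7997dcc8 pod in this run, like 52scc just above, reports Image:webserver:404, a deliberately unresolvable tag, so the kubelet never finishes creating the httpd container and the Ready condition stays False with Reason:ContainersNotReady. Below is a minimal sketch of reading that blocking reason out of exactly the status fields printed in these dumps; notReadyReason is an illustrative helper, not part of the e2e framework.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

// notReadyReason summarizes why a pod is still unavailable: it prefers the
// per-container waiting reason (ContainerCreating, ErrImagePull, ...) and
// falls back to the pod-level Ready condition reason (ContainersNotReady).
func notReadyReason(pod *v1.Pod) string {
    for _, cs := range pod.Status.ContainerStatuses {
        if w := cs.State.Waiting; w != nil {
            return fmt.Sprintf("container %q waiting: %s", cs.Name, w.Reason)
        }
    }
    for _, c := range pod.Status.Conditions {
        if c.Type == v1.PodReady && c.Status != v1.ConditionTrue {
            return fmt.Sprintf("pod not ready: %s", c.Reason)
        }
    }
    return "ready"
}

func main() {
    // A status shaped like webserver-deployment-c7997dcc8-52scc above.
    pod := &v1.Pod{Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{{
        Name:  "httpd",
        State: v1.ContainerState{Waiting: &v1.ContainerStateWaiting{Reason: "ContainerCreating"}},
    }}}}
    fmt.Println(notReadyReason(pod)) // container "httpd" waiting: ContainerCreating
}

May 11 16:01:28.624: INFO: Pod "webserver-deployment-c7997dcc8-5chmq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5chmq webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-5chmq bea0ca40-749b-416a-a6ac-5c7f0aba5ca0 15273075 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040ef9c7 0xc0040ef9c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClas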
sName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.624: INFO: Pod "webserver-deployment-c7997dcc8-5k5vl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5k5vl webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-5k5vl 16bfd68f-bf61-49e2-b9a8-43695a6fe698 15272994 0 2020-05-11 16:01:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040efaf7 0xc0040efaf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,
TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 16:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.625: INFO: Pod "webserver-deployment-c7997dcc8-5x77m" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5x77m webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-5x77m ee3eebca-739c-4a0b-bab8-df73d8d7ac82 15273071 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040efc77 0xc0040efc78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.625: INFO: Pod "webserver-deployment-c7997dcc8-8x87g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8x87g webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-8x87g 0cc29471-bfaf-48ea-af77-78f00454cc60 15273074 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
3c04678b-452f-491c-9def-b5ab66874001 0xc0040efda7 0xc0040efda8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.625: INFO: Pod "webserver-deployment-c7997dcc8-c4khb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c4khb webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-c4khb 4bacf6bf-e8b8-4730-bc87-13f9e9c7d82e 15273064 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040efed7 0xc0040efed8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.625: INFO: Pod "webserver-deployment-c7997dcc8-hvfpp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hvfpp webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-hvfpp 
078eb69b-9376-4ff4-b0dc-3f705b59633d 15273010 0 2020-05-11 16:01:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040d0017 0xc0040d0018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 16:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.625: INFO: Pod "webserver-deployment-c7997dcc8-l56cs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l56cs webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-l56cs 3014e065-362e-4929-8b27-04ed3976845a 15273092 0 2020-05-11 16:01:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040d01c7 0xc0040d01c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effec
t:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-11 16:01:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.625: INFO: Pod "webserver-deployment-c7997dcc8-n4sdj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n4sdj webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-n4sdj bad43793-6aad-4ec2-a5a9-2e36059930db 15273083 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040d03a7 0xc0040d03a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.625: INFO: Pod "webserver-deployment-c7997dcc8-qrg92" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qrg92 webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-qrg92 719ffff0-2a65-470c-99df-9e0a6f495cbe 15273021 0 2020-05-11 16:01:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
3c04678b-452f-491c-9def-b5ab66874001 0xc0040d04f7 0xc0040d04f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-11 16:01:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.626: INFO: Pod "webserver-deployment-c7997dcc8-v5r6b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v5r6b webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-v5r6b bb1d67c4-5d82-48a5-9a44-dbbbf6d25805 15273090 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040d0677 0xc0040d0678}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.626: INFO: Pod "webserver-deployment-c7997dcc8-x7f2f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x7f2f webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-x7f2f bccf5bb3-ee55-4b64-b607-95d5ec531ac0 15273027 0 2020-05-11 16:01:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040d07a7 0xc0040d07a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityCla
ssName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 16:01:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 16:01:28.626: INFO: Pod "webserver-deployment-c7997dcc8-z2q9x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z2q9x webserver-deployment-c7997dcc8- deployment-4046 /api/v1/namespaces/deployment-4046/pods/webserver-deployment-c7997dcc8-z2q9x 06b26bdc-e815-4882-bfab-693dcc3e1317 15273076 0 2020-05-11 16:01:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3c04678b-452f-491c-9def-b5ab66874001 0xc0040d0927 0xc0040d0928}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nl5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nl5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nl5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:01:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:01:28.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4046" for this suite. 
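The Pod dumps above show the expected shape of this spec: the Deployment was scaled while a rollout to the unresolvable image webserver:404 was still in flight, so every webserver-deployment-c7997dcc8 pod is Pending with its httpd container Waiting in ContainerCreating, and the controller splits the new replica count proportionally between the old and new ReplicaSets. A minimal sketch of proportional scaling done by hand, assuming kubectl access to a test cluster (the deployment name and replica counts below are illustrative, not taken from this run):

kubectl create deployment webserver --image=httpd:2.4.38-alpine
kubectl scale deployment/webserver --replicas=10
kubectl set image deployment/webserver httpd=webserver:404   # rollout stalls: this tag does not resolve
kubectl scale deployment/webserver --replicas=30             # scale while the rollout is incomplete
kubectl get rs                                               # each ReplicaSet grows in proportion to its prior share

Because the RollingUpdate strategy caps surge and unavailability, the added replicas are divided between the old and new ReplicaSets in proportion to their current sizes instead of all landing on one side.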
• [SLOW TEST:20.103 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":114,"skipped":1980,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:01:28.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:02:41.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1617" for this suite. 
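In this runtime spec the container names encode the restart policy under test (terminate-cmd-rpa, -rpof and -rpn for Always, OnFailure and Never), and the assertions cover the resulting RestartCount, pod Phase, Ready condition and terminal State for each policy. A minimal sketch of the same check done by hand, assuming kubectl access (the pod name, container name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF

# with restartPolicy OnFailure and a zero exit code the pod should settle in
# phase Succeeded with restartCount 0
kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'

Switching the command to a non-zero exit should instead keep the kubelet restarting the container with backoff, incrementing restartCount (eventually CrashLoopBackOff) while the phase remains Running; with restartPolicy Never a failing container drives the phase to Failed instead.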
• [SLOW TEST:72.398 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1981,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:02:41.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-815 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 11 16:02:41.704: INFO: Found 0 stateful pods, waiting for 3 May 11 16:02:51.708: INFO: Found 2 stateful pods, waiting for 3 May 11 16:03:01.847: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 16:03:01.847: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 16:03:01.847: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 16:03:02.974: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 11 16:03:13.176: INFO: Updating stateful set ss2 May 11 16:03:13.190: INFO: Waiting for Pod statefulset-815/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 11 16:03:23.400: INFO: Found 2 stateful pods, waiting for 3 May 11 16:03:33.404: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 16:03:33.404: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 16:03:33.404: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 16:03:43.405: INFO: Waiting 
for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 16:03:43.405: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 16:03:43.405: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 11 16:03:43.428: INFO: Updating stateful set ss2 May 11 16:03:44.142: INFO: Waiting for Pod statefulset-815/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 16:03:54.167: INFO: Updating stateful set ss2 May 11 16:03:54.266: INFO: Waiting for StatefulSet statefulset-815/ss2 to complete update May 11 16:03:54.266: INFO: Waiting for Pod statefulset-815/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 16:04:04.286: INFO: Waiting for StatefulSet statefulset-815/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 16:04:15.147: INFO: Deleting all statefulset in ns statefulset-815 May 11 16:04:15.151: INFO: Scaling statefulset ss2 to 0 May 11 16:04:46.595: INFO: Waiting for statefulset status.replicas updated to 0 May 11 16:04:46.605: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:04:46.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-815" for this suite. • [SLOW TEST:125.353 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":116,"skipped":1987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:04:46.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5655 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace 
services-5655 I0511 16:04:47.094813 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5655, replica count: 2 I0511 16:04:50.145308 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:04:53.145530 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:04:56.145798 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 16:04:56.145: INFO: Creating new exec pod May 11 16:05:01.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5655 execpodn9p7g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 11 16:05:11.727: INFO: stderr: "I0511 16:05:11.640756 1946 log.go:172] (0xc0000ff6b0) (0xc0004f6780) Create stream\nI0511 16:05:11.640808 1946 log.go:172] (0xc0000ff6b0) (0xc0004f6780) Stream added, broadcasting: 1\nI0511 16:05:11.644020 1946 log.go:172] (0xc0000ff6b0) Reply frame received for 1\nI0511 16:05:11.644128 1946 log.go:172] (0xc0000ff6b0) (0xc0007b15e0) Create stream\nI0511 16:05:11.644145 1946 log.go:172] (0xc0000ff6b0) (0xc0007b15e0) Stream added, broadcasting: 3\nI0511 16:05:11.645338 1946 log.go:172] (0xc0000ff6b0) Reply frame received for 3\nI0511 16:05:11.645369 1946 log.go:172] (0xc0000ff6b0) (0xc000708000) Create stream\nI0511 16:05:11.645382 1946 log.go:172] (0xc0000ff6b0) (0xc000708000) Stream added, broadcasting: 5\nI0511 16:05:11.646398 1946 log.go:172] (0xc0000ff6b0) Reply frame received for 5\nI0511 16:05:11.720104 1946 log.go:172] (0xc0000ff6b0) Data frame received for 5\nI0511 16:05:11.720147 1946 log.go:172] (0xc000708000) (5) Data frame handling\nI0511 16:05:11.720180 1946 log.go:172] (0xc000708000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0511 16:05:11.720309 1946 log.go:172] (0xc0000ff6b0) Data frame received for 5\nI0511 16:05:11.720338 1946 log.go:172] (0xc000708000) (5) Data frame handling\nI0511 16:05:11.720370 1946 log.go:172] (0xc000708000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 16:05:11.720579 1946 log.go:172] (0xc0000ff6b0) Data frame received for 3\nI0511 16:05:11.720616 1946 log.go:172] (0xc0007b15e0) (3) Data frame handling\nI0511 16:05:11.720793 1946 log.go:172] (0xc0000ff6b0) Data frame received for 5\nI0511 16:05:11.720805 1946 log.go:172] (0xc000708000) (5) Data frame handling\nI0511 16:05:11.722439 1946 log.go:172] (0xc0000ff6b0) Data frame received for 1\nI0511 16:05:11.722454 1946 log.go:172] (0xc0004f6780) (1) Data frame handling\nI0511 16:05:11.722461 1946 log.go:172] (0xc0004f6780) (1) Data frame sent\nI0511 16:05:11.722470 1946 log.go:172] (0xc0000ff6b0) (0xc0004f6780) Stream removed, broadcasting: 1\nI0511 16:05:11.722513 1946 log.go:172] (0xc0000ff6b0) Go away received\nI0511 16:05:11.722707 1946 log.go:172] (0xc0000ff6b0) (0xc0004f6780) Stream removed, broadcasting: 1\nI0511 16:05:11.722718 1946 log.go:172] (0xc0000ff6b0) (0xc0007b15e0) Stream removed, broadcasting: 3\nI0511 16:05:11.722724 1946 log.go:172] (0xc0000ff6b0) (0xc000708000) Stream removed, broadcasting: 5\n" May 11 16:05:11.727: INFO: stdout: "" May 11 16:05:11.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5655 
execpodn9p7g -- /bin/sh -x -c nc -zv -t -w 2 10.110.144.110 80' May 11 16:05:11.920: INFO: stderr: "I0511 16:05:11.857424 1974 log.go:172] (0xc000932000) (0xc000900000) Create stream\nI0511 16:05:11.857470 1974 log.go:172] (0xc000932000) (0xc000900000) Stream added, broadcasting: 1\nI0511 16:05:11.859350 1974 log.go:172] (0xc000932000) Reply frame received for 1\nI0511 16:05:11.859401 1974 log.go:172] (0xc000932000) (0xc0008da000) Create stream\nI0511 16:05:11.859417 1974 log.go:172] (0xc000932000) (0xc0008da000) Stream added, broadcasting: 3\nI0511 16:05:11.860170 1974 log.go:172] (0xc000932000) Reply frame received for 3\nI0511 16:05:11.860194 1974 log.go:172] (0xc000932000) (0xc0009000a0) Create stream\nI0511 16:05:11.860201 1974 log.go:172] (0xc000932000) (0xc0009000a0) Stream added, broadcasting: 5\nI0511 16:05:11.860991 1974 log.go:172] (0xc000932000) Reply frame received for 5\nI0511 16:05:11.912894 1974 log.go:172] (0xc000932000) Data frame received for 5\nI0511 16:05:11.912943 1974 log.go:172] (0xc0009000a0) (5) Data frame handling\nI0511 16:05:11.912964 1974 log.go:172] (0xc0009000a0) (5) Data frame sent\nI0511 16:05:11.912975 1974 log.go:172] (0xc000932000) Data frame received for 5\nI0511 16:05:11.912981 1974 log.go:172] (0xc0009000a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.144.110 80\nConnection to 10.110.144.110 80 port [tcp/http] succeeded!\nI0511 16:05:11.913004 1974 log.go:172] (0xc000932000) Data frame received for 3\nI0511 16:05:11.913017 1974 log.go:172] (0xc0008da000) (3) Data frame handling\nI0511 16:05:11.914563 1974 log.go:172] (0xc000932000) Data frame received for 1\nI0511 16:05:11.914587 1974 log.go:172] (0xc000900000) (1) Data frame handling\nI0511 16:05:11.914600 1974 log.go:172] (0xc000900000) (1) Data frame sent\nI0511 16:05:11.914621 1974 log.go:172] (0xc000932000) (0xc000900000) Stream removed, broadcasting: 1\nI0511 16:05:11.914705 1974 log.go:172] (0xc000932000) Go away received\nI0511 16:05:11.915032 1974 log.go:172] (0xc000932000) (0xc000900000) Stream removed, broadcasting: 1\nI0511 16:05:11.915065 1974 log.go:172] (0xc000932000) (0xc0008da000) Stream removed, broadcasting: 3\nI0511 16:05:11.915083 1974 log.go:172] (0xc000932000) (0xc0009000a0) Stream removed, broadcasting: 5\n" May 11 16:05:11.920: INFO: stdout: "" May 11 16:05:11.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5655 execpodn9p7g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31535' May 11 16:05:12.105: INFO: stderr: "I0511 16:05:12.042568 1993 log.go:172] (0xc0008d6630) (0xc00068df40) Create stream\nI0511 16:05:12.042620 1993 log.go:172] (0xc0008d6630) (0xc00068df40) Stream added, broadcasting: 1\nI0511 16:05:12.044661 1993 log.go:172] (0xc0008d6630) Reply frame received for 1\nI0511 16:05:12.044698 1993 log.go:172] (0xc0008d6630) (0xc0005dc820) Create stream\nI0511 16:05:12.044709 1993 log.go:172] (0xc0008d6630) (0xc0005dc820) Stream added, broadcasting: 3\nI0511 16:05:12.045678 1993 log.go:172] (0xc0008d6630) Reply frame received for 3\nI0511 16:05:12.045700 1993 log.go:172] (0xc0008d6630) (0xc0006f4c80) Create stream\nI0511 16:05:12.045707 1993 log.go:172] (0xc0008d6630) (0xc0006f4c80) Stream added, broadcasting: 5\nI0511 16:05:12.046574 1993 log.go:172] (0xc0008d6630) Reply frame received for 5\nI0511 16:05:12.097756 1993 log.go:172] (0xc0008d6630) Data frame received for 5\nI0511 16:05:12.097774 1993 log.go:172] (0xc0006f4c80) (5) Data frame handling\nI0511 16:05:12.097782 1993 log.go:172] (0xc0006f4c80) (5) Data frame 
sent\nI0511 16:05:12.097787 1993 log.go:172] (0xc0008d6630) Data frame received for 5\nI0511 16:05:12.097791 1993 log.go:172] (0xc0006f4c80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31535\nConnection to 172.17.0.10 31535 port [tcp/31535] succeeded!\nI0511 16:05:12.097869 1993 log.go:172] (0xc0008d6630) Data frame received for 3\nI0511 16:05:12.097880 1993 log.go:172] (0xc0005dc820) (3) Data frame handling\nI0511 16:05:12.099447 1993 log.go:172] (0xc0008d6630) Data frame received for 1\nI0511 16:05:12.099466 1993 log.go:172] (0xc00068df40) (1) Data frame handling\nI0511 16:05:12.099477 1993 log.go:172] (0xc00068df40) (1) Data frame sent\nI0511 16:05:12.099619 1993 log.go:172] (0xc0008d6630) (0xc00068df40) Stream removed, broadcasting: 1\nI0511 16:05:12.099923 1993 log.go:172] (0xc0008d6630) Go away received\nI0511 16:05:12.100016 1993 log.go:172] (0xc0008d6630) (0xc00068df40) Stream removed, broadcasting: 1\nI0511 16:05:12.100041 1993 log.go:172] (0xc0008d6630) (0xc0005dc820) Stream removed, broadcasting: 3\nI0511 16:05:12.100052 1993 log.go:172] (0xc0008d6630) (0xc0006f4c80) Stream removed, broadcasting: 5\n" May 11 16:05:12.105: INFO: stdout: "" May 11 16:05:12.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5655 execpodn9p7g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31535' May 11 16:05:12.299: INFO: stderr: "I0511 16:05:12.231746 2013 log.go:172] (0xc0008d2a50) (0xc00085a000) Create stream\nI0511 16:05:12.231824 2013 log.go:172] (0xc0008d2a50) (0xc00085a000) Stream added, broadcasting: 1\nI0511 16:05:12.233775 2013 log.go:172] (0xc0008d2a50) Reply frame received for 1\nI0511 16:05:12.233834 2013 log.go:172] (0xc0008d2a50) (0xc00089a0a0) Create stream\nI0511 16:05:12.233863 2013 log.go:172] (0xc0008d2a50) (0xc00089a0a0) Stream added, broadcasting: 3\nI0511 16:05:12.234683 2013 log.go:172] (0xc0008d2a50) Reply frame received for 3\nI0511 16:05:12.234737 2013 log.go:172] (0xc0008d2a50) (0xc00089a140) Create stream\nI0511 16:05:12.234762 2013 log.go:172] (0xc0008d2a50) (0xc00089a140) Stream added, broadcasting: 5\nI0511 16:05:12.235547 2013 log.go:172] (0xc0008d2a50) Reply frame received for 5\nI0511 16:05:12.294365 2013 log.go:172] (0xc0008d2a50) Data frame received for 3\nI0511 16:05:12.294390 2013 log.go:172] (0xc00089a0a0) (3) Data frame handling\nI0511 16:05:12.294427 2013 log.go:172] (0xc0008d2a50) Data frame received for 5\nI0511 16:05:12.294472 2013 log.go:172] (0xc00089a140) (5) Data frame handling\nI0511 16:05:12.294500 2013 log.go:172] (0xc00089a140) (5) Data frame sent\nI0511 16:05:12.294513 2013 log.go:172] (0xc0008d2a50) Data frame received for 5\nI0511 16:05:12.294520 2013 log.go:172] (0xc00089a140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31535\nConnection to 172.17.0.8 31535 port [tcp/31535] succeeded!\nI0511 16:05:12.295948 2013 log.go:172] (0xc0008d2a50) Data frame received for 1\nI0511 16:05:12.295965 2013 log.go:172] (0xc00085a000) (1) Data frame handling\nI0511 16:05:12.295980 2013 log.go:172] (0xc00085a000) (1) Data frame sent\nI0511 16:05:12.295989 2013 log.go:172] (0xc0008d2a50) (0xc00085a000) Stream removed, broadcasting: 1\nI0511 16:05:12.296072 2013 log.go:172] (0xc0008d2a50) Go away received\nI0511 16:05:12.296345 2013 log.go:172] (0xc0008d2a50) (0xc00085a000) Stream removed, broadcasting: 1\nI0511 16:05:12.296360 2013 log.go:172] (0xc0008d2a50) (0xc00089a0a0) Stream removed, broadcasting: 3\nI0511 16:05:12.296369 2013 log.go:172] (0xc0008d2a50) (0xc00089a140) Stream removed, 
broadcasting: 5\n" May 11 16:05:12.299: INFO: stdout: "" May 11 16:05:12.299: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:05:12.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5655" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.352 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":117,"skipped":2035,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:05:13.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-82637046-e6bf-4441-adcc-a5bf450fc90a STEP: Creating secret with name s-test-opt-upd-0c34cebb-b297-42f3-998c-6a5f7fe97ee1 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-82637046-e6bf-4441-adcc-a5bf450fc90a STEP: Updating secret s-test-opt-upd-0c34cebb-b297-42f3-998c-6a5f7fe97ee1 STEP: Creating secret with name s-test-opt-create-d4a7d966-574e-455f-8b10-49e0a19f0821 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:06:54.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7893" for this suite. 
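For context, the optional-secret behavior exercised in the steps above (a source secret is deleted, another updated, a third created, and the projected volume converges) can be reproduced with a projected volume whose secret sources are marked optional. A minimal sketch, with illustrative pod and secret names rather than the generated names from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo    # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-test-opt-del     # optional: pod keeps running if this is deleted
          optional: true
      - secret:
          name: s-test-opt-upd     # updates to this secret appear in the volume
          optional: true
EOF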
• [SLOW TEST:101.287 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":2043,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:06:54.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:06:54.616: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7219 I0511 16:06:54.659196 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7219, replica count: 1 I0511 16:06:55.709675 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:06:56.709852 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:06:57.710072 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:06:58.710263 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:06:59.710459 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:07:00.710694 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 16:07:01.292: INFO: Created: latency-svc-cvpgc May 11 16:07:01.581: INFO: Got endpoints: latency-svc-cvpgc [770.814783ms] May 11 16:07:01.855: INFO: Created: latency-svc-gkfmn May 11 16:07:01.874: INFO: Got endpoints: latency-svc-gkfmn [292.652794ms] May 11 16:07:02.122: INFO: Created: latency-svc-vqs67 May 11 16:07:02.784: INFO: Got endpoints: latency-svc-vqs67 [1.202495707s] May 11 16:07:02.787: INFO: Created: latency-svc-zsxx6 May 11 16:07:02.920: INFO: Got endpoints: latency-svc-zsxx6 [1.33871175s] May 11 16:07:02.969: INFO: Created: latency-svc-m4j8x May 11 16:07:03.010: INFO: Got endpoints: latency-svc-m4j8x [1.429006699s] May 11 16:07:03.136: INFO: Created: latency-svc-grpxv May 11 16:07:03.167: INFO: Got endpoints: latency-svc-grpxv [1.585171134s] May 11 16:07:03.640: INFO: Created: latency-svc-bnwng May 11 16:07:03.694: INFO: Got endpoints: latency-svc-bnwng [2.112704931s] May 11 16:07:03.932: INFO: 
Created: latency-svc-xd6kv May 11 16:07:04.006: INFO: Got endpoints: latency-svc-xd6kv [2.424467729s] May 11 16:07:04.136: INFO: Created: latency-svc-pndmh May 11 16:07:04.458: INFO: Got endpoints: latency-svc-pndmh [2.876679952s] May 11 16:07:04.915: INFO: Created: latency-svc-x8r6j May 11 16:07:04.919: INFO: Got endpoints: latency-svc-x8r6j [3.337252406s] May 11 16:07:05.154: INFO: Created: latency-svc-q5fjs May 11 16:07:05.382: INFO: Got endpoints: latency-svc-q5fjs [3.800651706s] May 11 16:07:05.681: INFO: Created: latency-svc-mp6ck May 11 16:07:05.688: INFO: Got endpoints: latency-svc-mp6ck [4.105939023s] May 11 16:07:05.951: INFO: Created: latency-svc-hvz2h May 11 16:07:06.035: INFO: Got endpoints: latency-svc-hvz2h [4.453265216s] May 11 16:07:06.301: INFO: Created: latency-svc-6lst7 May 11 16:07:06.525: INFO: Got endpoints: latency-svc-6lst7 [4.943427567s] May 11 16:07:07.317: INFO: Created: latency-svc-pj7k2 May 11 16:07:07.640: INFO: Got endpoints: latency-svc-pj7k2 [6.058848708s] May 11 16:07:08.004: INFO: Created: latency-svc-bvzdj May 11 16:07:08.072: INFO: Got endpoints: latency-svc-bvzdj [6.490947119s] May 11 16:07:08.076: INFO: Created: latency-svc-zd2xz May 11 16:07:08.256: INFO: Got endpoints: latency-svc-zd2xz [6.38145413s] May 11 16:07:08.324: INFO: Created: latency-svc-thtlx May 11 16:07:08.342: INFO: Got endpoints: latency-svc-thtlx [5.558551849s] May 11 16:07:08.450: INFO: Created: latency-svc-xns97 May 11 16:07:08.468: INFO: Got endpoints: latency-svc-xns97 [5.548220828s] May 11 16:07:08.498: INFO: Created: latency-svc-gmbwq May 11 16:07:08.511: INFO: Got endpoints: latency-svc-gmbwq [5.500289942s] May 11 16:07:08.638: INFO: Created: latency-svc-vgqdf May 11 16:07:08.680: INFO: Got endpoints: latency-svc-vgqdf [5.512713994s] May 11 16:07:08.777: INFO: Created: latency-svc-twlsn May 11 16:07:08.799: INFO: Got endpoints: latency-svc-twlsn [5.104295221s] May 11 16:07:08.945: INFO: Created: latency-svc-d8gcb May 11 16:07:08.968: INFO: Got endpoints: latency-svc-d8gcb [4.961756391s] May 11 16:07:09.171: INFO: Created: latency-svc-w5jsw May 11 16:07:09.183: INFO: Got endpoints: latency-svc-w5jsw [4.72442134s] May 11 16:07:09.267: INFO: Created: latency-svc-lfhvh May 11 16:07:09.387: INFO: Got endpoints: latency-svc-lfhvh [4.468379145s] May 11 16:07:09.390: INFO: Created: latency-svc-s6jwn May 11 16:07:09.417: INFO: Got endpoints: latency-svc-s6jwn [4.034861998s] May 11 16:07:09.622: INFO: Created: latency-svc-7rlpw May 11 16:07:09.683: INFO: Got endpoints: latency-svc-7rlpw [3.994958724s] May 11 16:07:09.964: INFO: Created: latency-svc-4vxvq May 11 16:07:10.150: INFO: Got endpoints: latency-svc-4vxvq [4.115539963s] May 11 16:07:10.352: INFO: Created: latency-svc-tth29 May 11 16:07:10.676: INFO: Got endpoints: latency-svc-tth29 [4.150554312s] May 11 16:07:10.981: INFO: Created: latency-svc-4skq4 May 11 16:07:11.100: INFO: Got endpoints: latency-svc-4skq4 [3.460028294s] May 11 16:07:11.857: INFO: Created: latency-svc-h48qg May 11 16:07:11.888: INFO: Got endpoints: latency-svc-h48qg [3.815499893s] May 11 16:07:11.952: INFO: Created: latency-svc-59qpd May 11 16:07:12.053: INFO: Got endpoints: latency-svc-59qpd [3.797477019s] May 11 16:07:12.226: INFO: Created: latency-svc-lbhkn May 11 16:07:12.238: INFO: Got endpoints: latency-svc-lbhkn [3.89557369s] May 11 16:07:12.295: INFO: Created: latency-svc-bnv6c May 11 16:07:12.394: INFO: Got endpoints: latency-svc-bnv6c [3.925257675s] May 11 16:07:12.452: INFO: Created: latency-svc-5jmnb May 11 16:07:12.483: INFO: Got endpoints: 
latency-svc-5jmnb [3.971812645s] May 11 16:07:12.537: INFO: Created: latency-svc-gpklb May 11 16:07:12.544: INFO: Got endpoints: latency-svc-gpklb [3.863937343s] May 11 16:07:12.589: INFO: Created: latency-svc-7ncgw May 11 16:07:12.627: INFO: Got endpoints: latency-svc-7ncgw [3.827875489s] May 11 16:07:12.723: INFO: Created: latency-svc-xlcwm May 11 16:07:12.726: INFO: Got endpoints: latency-svc-xlcwm [3.758144448s] May 11 16:07:12.794: INFO: Created: latency-svc-zg4k5 May 11 16:07:12.808: INFO: Got endpoints: latency-svc-zg4k5 [3.625071959s] May 11 16:07:12.886: INFO: Created: latency-svc-hzjhp May 11 16:07:12.910: INFO: Got endpoints: latency-svc-hzjhp [3.522472825s] May 11 16:07:12.961: INFO: Created: latency-svc-bs7qb May 11 16:07:13.034: INFO: Got endpoints: latency-svc-bs7qb [3.617092479s] May 11 16:07:13.244: INFO: Created: latency-svc-6tsm8 May 11 16:07:13.430: INFO: Got endpoints: latency-svc-6tsm8 [3.747439421s] May 11 16:07:13.431: INFO: Created: latency-svc-tql7q May 11 16:07:13.473: INFO: Got endpoints: latency-svc-tql7q [3.322055499s] May 11 16:07:13.640: INFO: Created: latency-svc-pjr6w May 11 16:07:13.718: INFO: Got endpoints: latency-svc-pjr6w [3.042720719s] May 11 16:07:13.909: INFO: Created: latency-svc-5grkk May 11 16:07:13.913: INFO: Got endpoints: latency-svc-5grkk [2.812811113s] May 11 16:07:14.585: INFO: Created: latency-svc-lvn68 May 11 16:07:14.993: INFO: Got endpoints: latency-svc-lvn68 [3.105252445s] May 11 16:07:14.996: INFO: Created: latency-svc-nt8wz May 11 16:07:15.007: INFO: Got endpoints: latency-svc-nt8wz [2.953402154s] May 11 16:07:15.185: INFO: Created: latency-svc-68m6g May 11 16:07:15.335: INFO: Got endpoints: latency-svc-68m6g [3.096718426s] May 11 16:07:15.335: INFO: Created: latency-svc-c6ljk May 11 16:07:15.550: INFO: Got endpoints: latency-svc-c6ljk [3.156092991s] May 11 16:07:15.763: INFO: Created: latency-svc-fvqgz May 11 16:07:15.915: INFO: Got endpoints: latency-svc-fvqgz [3.43238316s] May 11 16:07:15.976: INFO: Created: latency-svc-24f4r May 11 16:07:15.992: INFO: Got endpoints: latency-svc-24f4r [3.448898065s] May 11 16:07:16.096: INFO: Created: latency-svc-m76hp May 11 16:07:16.145: INFO: Got endpoints: latency-svc-m76hp [3.5178206s] May 11 16:07:16.196: INFO: Created: latency-svc-dbqnz May 11 16:07:16.223: INFO: Created: latency-svc-gbt64 May 11 16:07:16.223: INFO: Got endpoints: latency-svc-dbqnz [3.496632626s] May 11 16:07:16.239: INFO: Got endpoints: latency-svc-gbt64 [3.430961927s] May 11 16:07:16.282: INFO: Created: latency-svc-fjxff May 11 16:07:16.370: INFO: Got endpoints: latency-svc-fjxff [3.459899716s] May 11 16:07:16.371: INFO: Created: latency-svc-zqrc6 May 11 16:07:16.383: INFO: Got endpoints: latency-svc-zqrc6 [3.348999144s] May 11 16:07:16.414: INFO: Created: latency-svc-hv8jf May 11 16:07:16.468: INFO: Got endpoints: latency-svc-hv8jf [3.037612476s] May 11 16:07:16.561: INFO: Created: latency-svc-c6kwm May 11 16:07:16.595: INFO: Got endpoints: latency-svc-c6kwm [3.121740911s] May 11 16:07:16.595: INFO: Created: latency-svc-hbr4k May 11 16:07:16.648: INFO: Got endpoints: latency-svc-hbr4k [2.929326642s] May 11 16:07:16.726: INFO: Created: latency-svc-qlznr May 11 16:07:16.746: INFO: Got endpoints: latency-svc-qlznr [2.832369475s] May 11 16:07:16.769: INFO: Created: latency-svc-nbnp6 May 11 16:07:16.792: INFO: Got endpoints: latency-svc-nbnp6 [1.799171211s] May 11 16:07:16.860: INFO: Created: latency-svc-cwws2 May 11 16:07:16.888: INFO: Got endpoints: latency-svc-cwws2 [1.881375413s] May 11 16:07:16.889: INFO: Created: 
latency-svc-vnzsb May 11 16:07:16.930: INFO: Got endpoints: latency-svc-vnzsb [1.594903173s] May 11 16:07:17.034: INFO: Created: latency-svc-vphkw May 11 16:07:17.046: INFO: Got endpoints: latency-svc-vphkw [1.496389395s] May 11 16:07:17.074: INFO: Created: latency-svc-z8xpc May 11 16:07:17.089: INFO: Got endpoints: latency-svc-z8xpc [1.173929245s] May 11 16:07:17.166: INFO: Created: latency-svc-c2nhj May 11 16:07:17.179: INFO: Got endpoints: latency-svc-c2nhj [1.186453497s] May 11 16:07:17.243: INFO: Created: latency-svc-cqs8x May 11 16:07:17.246: INFO: Got endpoints: latency-svc-cqs8x [1.100962028s] May 11 16:07:17.410: INFO: Created: latency-svc-wwbvf May 11 16:07:17.431: INFO: Got endpoints: latency-svc-wwbvf [1.208682748s] May 11 16:07:17.453: INFO: Created: latency-svc-j8w4z May 11 16:07:17.468: INFO: Got endpoints: latency-svc-j8w4z [1.228684085s] May 11 16:07:17.494: INFO: Created: latency-svc-6zlw2 May 11 16:07:17.549: INFO: Got endpoints: latency-svc-6zlw2 [1.179614396s] May 11 16:07:17.579: INFO: Created: latency-svc-28774 May 11 16:07:17.594: INFO: Got endpoints: latency-svc-28774 [1.210831819s] May 11 16:07:17.621: INFO: Created: latency-svc-bqnq4 May 11 16:07:17.686: INFO: Got endpoints: latency-svc-bqnq4 [1.218331293s] May 11 16:07:17.728: INFO: Created: latency-svc-2ghfr May 11 16:07:17.746: INFO: Got endpoints: latency-svc-2ghfr [1.151782639s] May 11 16:07:17.855: INFO: Created: latency-svc-kmwz7 May 11 16:07:17.873: INFO: Got endpoints: latency-svc-kmwz7 [1.224908214s] May 11 16:07:17.902: INFO: Created: latency-svc-btttg May 11 16:07:17.921: INFO: Got endpoints: latency-svc-btttg [1.175114028s] May 11 16:07:17.986: INFO: Created: latency-svc-96z2n May 11 16:07:18.005: INFO: Got endpoints: latency-svc-96z2n [1.212900695s] May 11 16:07:18.038: INFO: Created: latency-svc-lgm5s May 11 16:07:18.057: INFO: Got endpoints: latency-svc-lgm5s [1.169134168s] May 11 16:07:18.120: INFO: Created: latency-svc-n5btj May 11 16:07:18.126: INFO: Got endpoints: latency-svc-n5btj [1.196045555s] May 11 16:07:18.166: INFO: Created: latency-svc-7bj57 May 11 16:07:18.181: INFO: Got endpoints: latency-svc-7bj57 [1.134825356s] May 11 16:07:18.214: INFO: Created: latency-svc-ndk24 May 11 16:07:18.268: INFO: Got endpoints: latency-svc-ndk24 [1.178684544s] May 11 16:07:18.273: INFO: Created: latency-svc-5k9n5 May 11 16:07:18.297: INFO: Got endpoints: latency-svc-5k9n5 [1.118128489s] May 11 16:07:18.327: INFO: Created: latency-svc-wx9n6 May 11 16:07:18.418: INFO: Got endpoints: latency-svc-wx9n6 [1.172202272s] May 11 16:07:18.420: INFO: Created: latency-svc-9g4sp May 11 16:07:18.835: INFO: Got endpoints: latency-svc-9g4sp [1.404052187s] May 11 16:07:19.167: INFO: Created: latency-svc-2c2jf May 11 16:07:19.171: INFO: Got endpoints: latency-svc-2c2jf [1.703152147s] May 11 16:07:19.316: INFO: Created: latency-svc-kj4fw May 11 16:07:19.351: INFO: Got endpoints: latency-svc-kj4fw [1.801493657s] May 11 16:07:19.700: INFO: Created: latency-svc-f5m8g May 11 16:07:19.831: INFO: Got endpoints: latency-svc-f5m8g [2.236223869s] May 11 16:07:19.834: INFO: Created: latency-svc-fc68v May 11 16:07:19.842: INFO: Got endpoints: latency-svc-fc68v [2.156182182s] May 11 16:07:19.871: INFO: Created: latency-svc-nwb94 May 11 16:07:19.879: INFO: Got endpoints: latency-svc-nwb94 [2.132674904s] May 11 16:07:19.913: INFO: Created: latency-svc-x74bz May 11 16:07:19.927: INFO: Got endpoints: latency-svc-x74bz [2.054501151s] May 11 16:07:19.998: INFO: Created: latency-svc-rwgc2 May 11 16:07:20.030: INFO: Got endpoints: 
latency-svc-rwgc2 [2.109348295s] May 11 16:07:20.106: INFO: Created: latency-svc-zw866 May 11 16:07:20.116: INFO: Got endpoints: latency-svc-zw866 [2.110494399s] May 11 16:07:20.159: INFO: Created: latency-svc-tnslh May 11 16:07:20.180: INFO: Got endpoints: latency-svc-tnslh [2.123114985s] May 11 16:07:20.256: INFO: Created: latency-svc-dm9n7 May 11 16:07:20.259: INFO: Got endpoints: latency-svc-dm9n7 [2.132578105s] May 11 16:07:20.339: INFO: Created: latency-svc-kjl5t May 11 16:07:20.423: INFO: Got endpoints: latency-svc-kjl5t [2.242099883s] May 11 16:07:20.428: INFO: Created: latency-svc-79vg8 May 11 16:07:20.470: INFO: Got endpoints: latency-svc-79vg8 [2.20240543s] May 11 16:07:20.615: INFO: Created: latency-svc-stgz4 May 11 16:07:20.619: INFO: Got endpoints: latency-svc-stgz4 [2.321638607s] May 11 16:07:20.783: INFO: Created: latency-svc-jxmww May 11 16:07:20.830: INFO: Got endpoints: latency-svc-jxmww [2.411811952s] May 11 16:07:20.974: INFO: Created: latency-svc-sv7vx May 11 16:07:20.984: INFO: Got endpoints: latency-svc-sv7vx [2.148785378s] May 11 16:07:21.118: INFO: Created: latency-svc-7pgzr May 11 16:07:21.138: INFO: Got endpoints: latency-svc-7pgzr [1.967060589s] May 11 16:07:21.166: INFO: Created: latency-svc-pg8rm May 11 16:07:21.185: INFO: Got endpoints: latency-svc-pg8rm [1.833789564s] May 11 16:07:21.239: INFO: Created: latency-svc-zm5xl May 11 16:07:21.256: INFO: Got endpoints: latency-svc-zm5xl [1.425678671s] May 11 16:07:21.401: INFO: Created: latency-svc-tnt54 May 11 16:07:21.412: INFO: Got endpoints: latency-svc-tnt54 [1.56977455s] May 11 16:07:21.467: INFO: Created: latency-svc-nh5n4 May 11 16:07:21.485: INFO: Got endpoints: latency-svc-nh5n4 [1.606109103s] May 11 16:07:21.568: INFO: Created: latency-svc-qw42m May 11 16:07:21.575: INFO: Got endpoints: latency-svc-qw42m [1.647733635s] May 11 16:07:21.612: INFO: Created: latency-svc-fkfxn May 11 16:07:21.640: INFO: Got endpoints: latency-svc-fkfxn [1.609952312s] May 11 16:07:21.741: INFO: Created: latency-svc-6swbz May 11 16:07:21.756: INFO: Got endpoints: latency-svc-6swbz [1.640069744s] May 11 16:07:21.826: INFO: Created: latency-svc-2czqj May 11 16:07:21.890: INFO: Got endpoints: latency-svc-2czqj [1.710128625s] May 11 16:07:21.895: INFO: Created: latency-svc-cn8wl May 11 16:07:21.924: INFO: Got endpoints: latency-svc-cn8wl [1.665841839s] May 11 16:07:21.971: INFO: Created: latency-svc-vgj7p May 11 16:07:21.979: INFO: Got endpoints: latency-svc-vgj7p [1.555372703s] May 11 16:07:22.047: INFO: Created: latency-svc-wn48j May 11 16:07:22.100: INFO: Got endpoints: latency-svc-wn48j [1.629243453s] May 11 16:07:22.208: INFO: Created: latency-svc-6rdqr May 11 16:07:22.263: INFO: Got endpoints: latency-svc-6rdqr [1.64369772s] May 11 16:07:22.358: INFO: Created: latency-svc-5x5rk May 11 16:07:22.361: INFO: Got endpoints: latency-svc-5x5rk [1.531393486s] May 11 16:07:22.432: INFO: Created: latency-svc-fqkkd May 11 16:07:22.454: INFO: Got endpoints: latency-svc-fqkkd [1.469709982s] May 11 16:07:22.541: INFO: Created: latency-svc-ldjpx May 11 16:07:22.568: INFO: Got endpoints: latency-svc-ldjpx [1.430372013s] May 11 16:07:22.657: INFO: Created: latency-svc-87n8t May 11 16:07:22.664: INFO: Got endpoints: latency-svc-87n8t [1.479609553s] May 11 16:07:22.843: INFO: Created: latency-svc-p45bb May 11 16:07:22.875: INFO: Got endpoints: latency-svc-p45bb [1.618809401s] May 11 16:07:22.914: INFO: Created: latency-svc-mst4j May 11 16:07:22.935: INFO: Got endpoints: latency-svc-mst4j [1.522433993s] May 11 16:07:23.011: INFO: Created: 
latency-svc-6cx6n May 11 16:07:23.049: INFO: Got endpoints: latency-svc-6cx6n [1.563884193s] May 11 16:07:23.142: INFO: Created: latency-svc-7jr4v May 11 16:07:23.163: INFO: Got endpoints: latency-svc-7jr4v [1.588092593s] May 11 16:07:23.342: INFO: Created: latency-svc-sc6nj May 11 16:07:23.357: INFO: Got endpoints: latency-svc-sc6nj [1.716937689s] May 11 16:07:23.460: INFO: Created: latency-svc-vndsf May 11 16:07:23.476: INFO: Got endpoints: latency-svc-vndsf [1.719766799s] May 11 16:07:23.534: INFO: Created: latency-svc-lvr8h May 11 16:07:23.633: INFO: Got endpoints: latency-svc-lvr8h [1.74257941s] May 11 16:07:23.636: INFO: Created: latency-svc-89g57 May 11 16:07:23.650: INFO: Got endpoints: latency-svc-89g57 [1.725817951s] May 11 16:07:23.713: INFO: Created: latency-svc-znx2l May 11 16:07:23.764: INFO: Got endpoints: latency-svc-znx2l [1.785761161s] May 11 16:07:23.785: INFO: Created: latency-svc-jg9w5 May 11 16:07:23.813: INFO: Got endpoints: latency-svc-jg9w5 [1.713797448s] May 11 16:07:23.841: INFO: Created: latency-svc-rqsk9 May 11 16:07:23.862: INFO: Got endpoints: latency-svc-rqsk9 [1.599247151s] May 11 16:07:23.916: INFO: Created: latency-svc-2bpxx May 11 16:07:23.928: INFO: Got endpoints: latency-svc-2bpxx [1.566685314s] May 11 16:07:23.966: INFO: Created: latency-svc-jjgnq May 11 16:07:23.988: INFO: Got endpoints: latency-svc-jjgnq [1.534040696s] May 11 16:07:24.088: INFO: Created: latency-svc-cqmm7 May 11 16:07:24.092: INFO: Got endpoints: latency-svc-cqmm7 [1.523109836s] May 11 16:07:24.160: INFO: Created: latency-svc-5f74j May 11 16:07:24.256: INFO: Got endpoints: latency-svc-5f74j [1.59149253s] May 11 16:07:24.297: INFO: Created: latency-svc-svlgg May 11 16:07:24.338: INFO: Got endpoints: latency-svc-svlgg [1.462810298s] May 11 16:07:24.465: INFO: Created: latency-svc-vd54k May 11 16:07:24.668: INFO: Got endpoints: latency-svc-vd54k [1.732846301s] May 11 16:07:24.670: INFO: Created: latency-svc-85vll May 11 16:07:24.807: INFO: Got endpoints: latency-svc-85vll [1.757503592s] May 11 16:07:24.826: INFO: Created: latency-svc-dw8mn May 11 16:07:24.866: INFO: Got endpoints: latency-svc-dw8mn [1.702591404s] May 11 16:07:25.340: INFO: Created: latency-svc-vpctx May 11 16:07:25.350: INFO: Got endpoints: latency-svc-vpctx [1.993172425s] May 11 16:07:25.586: INFO: Created: latency-svc-b9pg6 May 11 16:07:25.632: INFO: Got endpoints: latency-svc-b9pg6 [2.15650778s] May 11 16:07:25.753: INFO: Created: latency-svc-bjtld May 11 16:07:25.821: INFO: Got endpoints: latency-svc-bjtld [2.187707547s] May 11 16:07:25.924: INFO: Created: latency-svc-qgt8d May 11 16:07:25.993: INFO: Got endpoints: latency-svc-qgt8d [2.343020018s] May 11 16:07:26.106: INFO: Created: latency-svc-wz8db May 11 16:07:26.168: INFO: Got endpoints: latency-svc-wz8db [2.403204181s] May 11 16:07:26.322: INFO: Created: latency-svc-qjr68 May 11 16:07:26.325: INFO: Got endpoints: latency-svc-qjr68 [2.511332606s] May 11 16:07:26.544: INFO: Created: latency-svc-mwcpt May 11 16:07:26.548: INFO: Got endpoints: latency-svc-mwcpt [2.686053862s] May 11 16:07:26.747: INFO: Created: latency-svc-5hvf8 May 11 16:07:26.757: INFO: Got endpoints: latency-svc-5hvf8 [2.829523972s] May 11 16:07:26.922: INFO: Created: latency-svc-5slhx May 11 16:07:26.925: INFO: Got endpoints: latency-svc-5slhx [2.936813005s] May 11 16:07:26.981: INFO: Created: latency-svc-l5nj4 May 11 16:07:27.014: INFO: Got endpoints: latency-svc-l5nj4 [2.922394983s] May 11 16:07:27.095: INFO: Created: latency-svc-xsw27 May 11 16:07:27.128: INFO: Got endpoints: 
latency-svc-xsw27 [2.871979366s] May 11 16:07:27.192: INFO: Created: latency-svc-86785 May 11 16:07:27.268: INFO: Got endpoints: latency-svc-86785 [2.929824696s] May 11 16:07:27.298: INFO: Created: latency-svc-x8l7w May 11 16:07:27.333: INFO: Got endpoints: latency-svc-x8l7w [2.66477228s] May 11 16:07:27.472: INFO: Created: latency-svc-52kl7 May 11 16:07:27.513: INFO: Got endpoints: latency-svc-52kl7 [2.706365132s] May 11 16:07:27.514: INFO: Created: latency-svc-qrj4n May 11 16:07:27.537: INFO: Got endpoints: latency-svc-qrj4n [2.671421775s] May 11 16:07:27.633: INFO: Created: latency-svc-rvmxj May 11 16:07:27.636: INFO: Got endpoints: latency-svc-rvmxj [2.285550876s] May 11 16:07:27.783: INFO: Created: latency-svc-wtgmx May 11 16:07:27.802: INFO: Got endpoints: latency-svc-wtgmx [2.169333851s] May 11 16:07:27.838: INFO: Created: latency-svc-7qmsr May 11 16:07:27.851: INFO: Got endpoints: latency-svc-7qmsr [2.029678423s] May 11 16:07:27.946: INFO: Created: latency-svc-cwqrn May 11 16:07:27.964: INFO: Got endpoints: latency-svc-cwqrn [1.97065097s] May 11 16:07:27.999: INFO: Created: latency-svc-pr5k4 May 11 16:07:28.006: INFO: Got endpoints: latency-svc-pr5k4 [1.838308447s] May 11 16:07:28.064: INFO: Created: latency-svc-zwmrq May 11 16:07:28.078: INFO: Got endpoints: latency-svc-zwmrq [1.752751257s] May 11 16:07:28.107: INFO: Created: latency-svc-w768z May 11 16:07:28.122: INFO: Got endpoints: latency-svc-w768z [1.573554542s] May 11 16:07:28.143: INFO: Created: latency-svc-bmtd5 May 11 16:07:28.238: INFO: Got endpoints: latency-svc-bmtd5 [1.480650695s] May 11 16:07:28.239: INFO: Created: latency-svc-lphkv May 11 16:07:28.263: INFO: Got endpoints: latency-svc-lphkv [1.337771788s] May 11 16:07:28.317: INFO: Created: latency-svc-z4pfb May 11 16:07:28.376: INFO: Got endpoints: latency-svc-z4pfb [1.361534392s] May 11 16:07:28.389: INFO: Created: latency-svc-s9wq6 May 11 16:07:28.405: INFO: Got endpoints: latency-svc-s9wq6 [1.276745771s] May 11 16:07:28.425: INFO: Created: latency-svc-7nxtd May 11 16:07:28.461: INFO: Got endpoints: latency-svc-7nxtd [1.193235313s] May 11 16:07:28.519: INFO: Created: latency-svc-gssvj May 11 16:07:28.522: INFO: Got endpoints: latency-svc-gssvj [1.18970271s] May 11 16:07:28.546: INFO: Created: latency-svc-l6hvf May 11 16:07:28.556: INFO: Got endpoints: latency-svc-l6hvf [1.042641939s] May 11 16:07:28.599: INFO: Created: latency-svc-s7ftj May 11 16:07:28.669: INFO: Got endpoints: latency-svc-s7ftj [1.131645481s] May 11 16:07:28.695: INFO: Created: latency-svc-tn65z May 11 16:07:28.713: INFO: Got endpoints: latency-svc-tn65z [1.07653903s] May 11 16:07:28.749: INFO: Created: latency-svc-dfrlr May 11 16:07:28.761: INFO: Got endpoints: latency-svc-dfrlr [958.924041ms] May 11 16:07:28.807: INFO: Created: latency-svc-ksjdz May 11 16:07:28.822: INFO: Got endpoints: latency-svc-ksjdz [971.308381ms] May 11 16:07:28.853: INFO: Created: latency-svc-sj9mv May 11 16:07:28.870: INFO: Got endpoints: latency-svc-sj9mv [906.0202ms] May 11 16:07:28.895: INFO: Created: latency-svc-h8mpp May 11 16:07:28.906: INFO: Got endpoints: latency-svc-h8mpp [900.026505ms] May 11 16:07:28.951: INFO: Created: latency-svc-wxxb5 May 11 16:07:28.960: INFO: Got endpoints: latency-svc-wxxb5 [882.751901ms] May 11 16:07:28.998: INFO: Created: latency-svc-6cnbg May 11 16:07:29.009: INFO: Got endpoints: latency-svc-6cnbg [886.943768ms] May 11 16:07:29.029: INFO: Created: latency-svc-pqrvw May 11 16:07:29.039: INFO: Got endpoints: latency-svc-pqrvw [800.918002ms] May 11 16:07:29.088: INFO: Created: 
latency-svc-h2chs May 11 16:07:29.113: INFO: Got endpoints: latency-svc-h2chs [849.535101ms] May 11 16:07:29.160: INFO: Created: latency-svc-44sf6 May 11 16:07:29.172: INFO: Got endpoints: latency-svc-44sf6 [796.355611ms] May 11 16:07:29.226: INFO: Created: latency-svc-8kddl May 11 16:07:29.229: INFO: Got endpoints: latency-svc-8kddl [824.218288ms] May 11 16:07:29.293: INFO: Created: latency-svc-f5rjg May 11 16:07:29.311: INFO: Got endpoints: latency-svc-f5rjg [849.751495ms] May 11 16:07:29.405: INFO: Created: latency-svc-tngvm May 11 16:07:29.408: INFO: Got endpoints: latency-svc-tngvm [885.839248ms] May 11 16:07:29.486: INFO: Created: latency-svc-jcxsp May 11 16:07:29.504: INFO: Got endpoints: latency-svc-jcxsp [948.087539ms] May 11 16:07:29.575: INFO: Created: latency-svc-wfnj6 May 11 16:07:29.594: INFO: Got endpoints: latency-svc-wfnj6 [924.697171ms] May 11 16:07:29.618: INFO: Created: latency-svc-gkjxt May 11 16:07:29.637: INFO: Got endpoints: latency-svc-gkjxt [923.749182ms] May 11 16:07:29.710: INFO: Created: latency-svc-9kbzv May 11 16:07:29.716: INFO: Got endpoints: latency-svc-9kbzv [955.436253ms] May 11 16:07:29.799: INFO: Created: latency-svc-6nzcd May 11 16:07:29.848: INFO: Got endpoints: latency-svc-6nzcd [1.026272865s] May 11 16:07:29.889: INFO: Created: latency-svc-p4d4c May 11 16:07:29.919: INFO: Got endpoints: latency-svc-p4d4c [1.048961526s] May 11 16:07:30.018: INFO: Created: latency-svc-tpxct May 11 16:07:30.027: INFO: Got endpoints: latency-svc-tpxct [1.120481015s] May 11 16:07:30.087: INFO: Created: latency-svc-kh54d May 11 16:07:30.106: INFO: Got endpoints: latency-svc-kh54d [1.145738651s] May 11 16:07:30.160: INFO: Created: latency-svc-s68rs May 11 16:07:30.172: INFO: Got endpoints: latency-svc-s68rs [1.163103947s] May 11 16:07:30.201: INFO: Created: latency-svc-26mft May 11 16:07:30.215: INFO: Got endpoints: latency-svc-26mft [1.175860813s] May 11 16:07:30.237: INFO: Created: latency-svc-f8htw May 11 16:07:30.258: INFO: Got endpoints: latency-svc-f8htw [1.145081848s] May 11 16:07:30.356: INFO: Created: latency-svc-vj8td May 11 16:07:30.372: INFO: Got endpoints: latency-svc-vj8td [1.199408826s] May 11 16:07:30.393: INFO: Created: latency-svc-txvcj May 11 16:07:30.414: INFO: Got endpoints: latency-svc-txvcj [1.185356682s] May 11 16:07:30.555: INFO: Created: latency-svc-9ljht May 11 16:07:30.570: INFO: Got endpoints: latency-svc-9ljht [1.259009308s] May 11 16:07:30.629: INFO: Created: latency-svc-27d5q May 11 16:07:30.642: INFO: Got endpoints: latency-svc-27d5q [1.234025142s] May 11 16:07:30.711: INFO: Created: latency-svc-5j2qw May 11 16:07:30.727: INFO: Got endpoints: latency-svc-5j2qw [1.223241334s] May 11 16:07:30.766: INFO: Created: latency-svc-rkrlt May 11 16:07:30.781: INFO: Got endpoints: latency-svc-rkrlt [1.187209972s] May 11 16:07:30.844: INFO: Created: latency-svc-qgtvw May 11 16:07:30.853: INFO: Got endpoints: latency-svc-qgtvw [1.216567404s] May 11 16:07:30.885: INFO: Created: latency-svc-m9hg6 May 11 16:07:30.902: INFO: Got endpoints: latency-svc-m9hg6 [1.1857842s] May 11 16:07:31.031: INFO: Created: latency-svc-nzrvr May 11 16:07:31.034: INFO: Got endpoints: latency-svc-nzrvr [1.1855347s] May 11 16:07:31.096: INFO: Created: latency-svc-cmvhn May 11 16:07:31.280: INFO: Got endpoints: latency-svc-cmvhn [1.360965296s] May 11 16:07:31.919: INFO: Created: latency-svc-r8nbm May 11 16:07:31.958: INFO: Got endpoints: latency-svc-r8nbm [1.931211123s] May 11 16:07:32.095: INFO: Created: latency-svc-6k7jp May 11 16:07:32.183: INFO: Got endpoints: 
latency-svc-6k7jp [2.076505383s] May 11 16:07:32.724: INFO: Created: latency-svc-q6kv5 May 11 16:07:32.785: INFO: Got endpoints: latency-svc-q6kv5 [2.613519747s] May 11 16:07:32.786: INFO: Latencies: [292.652794ms 796.355611ms 800.918002ms 824.218288ms 849.535101ms 849.751495ms 882.751901ms 885.839248ms 886.943768ms 900.026505ms 906.0202ms 923.749182ms 924.697171ms 948.087539ms 955.436253ms 958.924041ms 971.308381ms 1.026272865s 1.042641939s 1.048961526s 1.07653903s 1.100962028s 1.118128489s 1.120481015s 1.131645481s 1.134825356s 1.145081848s 1.145738651s 1.151782639s 1.163103947s 1.169134168s 1.172202272s 1.173929245s 1.175114028s 1.175860813s 1.178684544s 1.179614396s 1.185356682s 1.1855347s 1.1857842s 1.186453497s 1.187209972s 1.18970271s 1.193235313s 1.196045555s 1.199408826s 1.202495707s 1.208682748s 1.210831819s 1.212900695s 1.216567404s 1.218331293s 1.223241334s 1.224908214s 1.228684085s 1.234025142s 1.259009308s 1.276745771s 1.337771788s 1.33871175s 1.360965296s 1.361534392s 1.404052187s 1.425678671s 1.429006699s 1.430372013s 1.462810298s 1.469709982s 1.479609553s 1.480650695s 1.496389395s 1.522433993s 1.523109836s 1.531393486s 1.534040696s 1.555372703s 1.563884193s 1.566685314s 1.56977455s 1.573554542s 1.585171134s 1.588092593s 1.59149253s 1.594903173s 1.599247151s 1.606109103s 1.609952312s 1.618809401s 1.629243453s 1.640069744s 1.64369772s 1.647733635s 1.665841839s 1.702591404s 1.703152147s 1.710128625s 1.713797448s 1.716937689s 1.719766799s 1.725817951s 1.732846301s 1.74257941s 1.752751257s 1.757503592s 1.785761161s 1.799171211s 1.801493657s 1.833789564s 1.838308447s 1.881375413s 1.931211123s 1.967060589s 1.97065097s 1.993172425s 2.029678423s 2.054501151s 2.076505383s 2.109348295s 2.110494399s 2.112704931s 2.123114985s 2.132578105s 2.132674904s 2.148785378s 2.156182182s 2.15650778s 2.169333851s 2.187707547s 2.20240543s 2.236223869s 2.242099883s 2.285550876s 2.321638607s 2.343020018s 2.403204181s 2.411811952s 2.424467729s 2.511332606s 2.613519747s 2.66477228s 2.671421775s 2.686053862s 2.706365132s 2.812811113s 2.829523972s 2.832369475s 2.871979366s 2.876679952s 2.922394983s 2.929326642s 2.929824696s 2.936813005s 2.953402154s 3.037612476s 3.042720719s 3.096718426s 3.105252445s 3.121740911s 3.156092991s 3.322055499s 3.337252406s 3.348999144s 3.430961927s 3.43238316s 3.448898065s 3.459899716s 3.460028294s 3.496632626s 3.5178206s 3.522472825s 3.617092479s 3.625071959s 3.747439421s 3.758144448s 3.797477019s 3.800651706s 3.815499893s 3.827875489s 3.863937343s 3.89557369s 3.925257675s 3.971812645s 3.994958724s 4.034861998s 4.105939023s 4.115539963s 4.150554312s 4.453265216s 4.468379145s 4.72442134s 4.943427567s 4.961756391s 5.104295221s 5.500289942s 5.512713994s 5.548220828s 5.558551849s 6.058848708s 6.38145413s 6.490947119s] May 11 16:07:32.786: INFO: 50 %ile: 1.732846301s May 11 16:07:32.786: INFO: 90 %ile: 3.925257675s May 11 16:07:32.786: INFO: 99 %ile: 6.38145413s May 11 16:07:32.786: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:07:32.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7219" for this suite. 
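The 50/90/99 %ile figures above are read from the sorted sample list (200 samples in total). A rough shell equivalent, assuming a hypothetical file latencies.txt with one pre-sorted duration per line (not a file the suite actually writes), might be:

for p in 50 90 99; do
  awk -v p="$p" '{ a[NR] = $0 }
    END { i = int(NR * p / 100); if (i < 1) i = 1; print p " %ile: " a[i] }' latencies.txt
done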
• [SLOW TEST:38.686 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":119,"skipped":2057,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:07:33.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-2f51bfe8-1b30-4fc2-8b86-4a05f616b273 STEP: Creating a pod to test consume secrets May 11 16:07:33.493: INFO: Waiting up to 5m0s for pod "pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51" in namespace "secrets-5012" to be "success or failure" May 11 16:07:33.528: INFO: Pod "pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51": Phase="Pending", Reason="", readiness=false. Elapsed: 34.465018ms May 11 16:07:35.532: INFO: Pod "pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039119083s May 11 16:07:37.564: INFO: Pod "pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070334055s May 11 16:07:40.067: INFO: Pod "pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573455805s May 11 16:07:42.386: INFO: Pod "pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.892279196s STEP: Saw pod success May 11 16:07:42.386: INFO: Pod "pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51" satisfied condition "success or failure" May 11 16:07:42.631: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51 container secret-volume-test: STEP: delete the pod May 11 16:07:43.899: INFO: Waiting for pod pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51 to disappear May 11 16:07:44.034: INFO: Pod pod-secrets-3fc4de1f-de3d-404a-8054-d11986602d51 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:07:44.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5012" for this suite. 
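The "mappings" in the secrets test above refer to remapping a secret key to a custom file path inside the volume. A minimal sketch, assuming a pre-existing secret named secret-test-map with a key data-1 (illustrative names, not those generated by the run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1                # the secret key...
        path: new-path-data-1      # ...is exposed under this file name instead
EOF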
• [SLOW TEST:11.240 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2063,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:07:44.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:08:13.852: INFO: Container started at 2020-05-11 16:07:49 +0000 UTC, pod became ready at 2020-05-11 16:08:12 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:08:13.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4187" for this suite. 
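The probe test above asserts two things: the pod must not report Ready before the configured initial delay, and the container must never restart (readiness, unlike liveness, never kills a container). A minimal sketch of such a pod, with illustrative values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo       # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["true"]          # always succeeds once it starts being probed
      initialDelaySeconds: 30      # pod must not become Ready before ~30s
      periodSeconds: 5
EOF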
• [SLOW TEST:29.676 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2067,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:08:13.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b6eb98a2-c8e8-4950-8569-2c84827d950a STEP: Creating a pod to test consume configMaps May 11 16:08:14.212: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b" in namespace "projected-8815" to be "success or failure" May 11 16:08:14.253: INFO: Pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.926884ms May 11 16:08:16.388: INFO: Pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175240394s May 11 16:08:18.514: INFO: Pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302156585s May 11 16:08:20.575: INFO: Pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362966438s May 11 16:08:22.772: INFO: Pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559493509s May 11 16:08:24.883: INFO: Pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.671190385s STEP: Saw pod success May 11 16:08:24.884: INFO: Pod "pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b" satisfied condition "success or failure" May 11 16:08:24.887: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b container projected-configmap-volume-test: STEP: delete the pod May 11 16:08:26.538: INFO: Waiting for pod pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b to disappear May 11 16:08:26.541: INFO: Pod pod-projected-configmaps-6047f223-2bc5-42e6-b2ea-e99aabc96a2b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:08:26.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8815" for this suite. • [SLOW TEST:13.157 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2067,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:08:27.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9853b56d-8a73-4bda-9b86-9cc3e1f64a5a STEP: Creating a pod to test consume configMaps May 11 16:08:28.214: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb" in namespace "projected-7008" to be "success or failure" May 11 16:08:28.385: INFO: Pod "pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb": Phase="Pending", Reason="", readiness=false. Elapsed: 171.004478ms May 11 16:08:30.678: INFO: Pod "pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463585526s May 11 16:08:32.727: INFO: Pod "pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.512443149s May 11 16:08:35.055: INFO: Pod "pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb": Phase="Running", Reason="", readiness=true. Elapsed: 6.840453176s May 11 16:08:37.090: INFO: Pod "pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.876269954s STEP: Saw pod success May 11 16:08:37.091: INFO: Pod "pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb" satisfied condition "success or failure" May 11 16:08:37.093: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb container projected-configmap-volume-test: STEP: delete the pod May 11 16:08:38.620: INFO: Waiting for pod pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb to disappear May 11 16:08:39.347: INFO: Pod pod-projected-configmaps-f65a6082-52e8-4788-b282-6a643a1eb7cb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:08:39.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7008" for this suite. • [SLOW TEST:12.555 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2067,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:08:39.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 11 16:08:42.009: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 11 16:08:54.330: INFO: >>> kubeConfig: /root/.kube/config May 11 16:08:56.257: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:09:08.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3186" for this suite. 
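Both served versions of a multi-version CRD are expected to show up in the aggregated OpenAPI document (/openapi/v2), which is what the test above verifies. A minimal sketch of such a CRD, using a hypothetical group and kind:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com           # hypothetical group/plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true                  # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF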
• [SLOW TEST:29.435 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":124,"skipped":2073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:09:09.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 16:09:10.066: INFO: Waiting up to 5m0s for pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f" in namespace "downward-api-4190" to be "success or failure" May 11 16:09:10.216: INFO: Pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f": Phase="Pending", Reason="", readiness=false. Elapsed: 149.378225ms May 11 16:09:12.492: INFO: Pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.425387159s May 11 16:09:14.597: INFO: Pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530638493s May 11 16:09:16.911: INFO: Pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.844534888s May 11 16:09:18.915: INFO: Pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f": Phase="Running", Reason="", readiness=true. Elapsed: 8.848812514s May 11 16:09:20.919: INFO: Pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.852534807s STEP: Saw pod success May 11 16:09:20.919: INFO: Pod "downward-api-3f232129-adac-4872-8af4-38b5aa87991f" satisfied condition "success or failure" May 11 16:09:20.921: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3f232129-adac-4872-8af4-38b5aa87991f container dapi-container: STEP: delete the pod May 11 16:09:21.497: INFO: Waiting for pod downward-api-3f232129-adac-4872-8af4-38b5aa87991f to disappear May 11 16:09:21.499: INFO: Pod downward-api-3f232129-adac-4872-8af4-38b5aa87991f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:09:21.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4190" for this suite. 
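The env vars checked above come from downward API fieldRef selectors on the pod spec. A minimal sketch (pod name and env var names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF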
• [SLOW TEST:13.049 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2124,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:09:22.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 11 16:09:22.551: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 11 16:09:22.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4738' May 11 16:09:23.357: INFO: stderr: "" May 11 16:09:23.357: INFO: stdout: "service/agnhost-slave created\n" May 11 16:09:23.357: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 11 16:09:23.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4738' May 11 16:09:24.424: INFO: stderr: "" May 11 16:09:24.424: INFO: stdout: "service/agnhost-master created\n" May 11 16:09:24.424: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 11 16:09:24.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4738' May 11 16:09:26.786: INFO: stderr: "" May 11 16:09:26.786: INFO: stdout: "service/frontend created\n" May 11 16:09:26.786: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 11 16:09:26.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4738' May 11 16:09:27.531: INFO: stderr: "" May 11 16:09:27.531: INFO: stdout: "deployment.apps/frontend created\n" May 11 16:09:27.532: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 11 16:09:27.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4738' May 11 16:09:28.594: INFO: stderr: "" May 11 16:09:28.594: INFO: stdout: "deployment.apps/agnhost-master created\n" May 11 16:09:28.594: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 11 16:09:28.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4738' May 11 16:09:29.560: INFO: stderr: "" May 11 16:09:29.560: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 11 16:09:29.560: INFO: Waiting for all frontend pods to be Running. May 11 16:09:44.610: INFO: Waiting for frontend to serve content. May 11 16:09:44.620: INFO: Trying to add a new entry to the guestbook. May 11 16:09:44.631: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 11 16:09:44.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4738' May 11 16:09:44.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 16:09:44.796: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 11 16:09:44.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4738' May 11 16:09:44.979: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 11 16:09:44.979: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 16:09:44.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4738' May 11 16:09:45.196: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 16:09:45.196: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 16:09:45.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4738' May 11 16:09:45.303: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 16:09:45.303: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 16:09:45.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4738' May 11 16:09:45.405: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 16:09:45.405: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 16:09:45.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4738' May 11 16:09:45.519: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 16:09:45.519: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:09:45.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4738" for this suite. 
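One detail worth noting in the guestbook run: the frontend Service manifest ships with type: LoadBalancer commented out, because a cluster like this one has no load-balancer provider. On a cluster that does, the same Service would look roughly like this (a sketch; only the type line differs from the manifest the test applied):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # uncommented: allocates an external IP where the cluster supports it
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

The cleanup path uses kubectl delete --grace-period=0 --force, which is why every deletion above prints the "Immediate deletion does not wait for confirmation" warning: the objects are removed from the API server without waiting for the kubelet to confirm termination.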
• [SLOW TEST:23.411 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":126,"skipped":2136,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:09:45.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-94e12e22-d6d5-4879-a698-5e091546008a STEP: Creating a pod to test consume configMaps May 11 16:09:45.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db" in namespace "projected-422" to be "success or failure" May 11 16:09:46.151: INFO: Pod "pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db": Phase="Pending", Reason="", readiness=false. Elapsed: 419.147667ms May 11 16:09:48.155: INFO: Pod "pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423165516s May 11 16:09:50.312: INFO: Pod "pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580024186s May 11 16:09:52.482: INFO: Pod "pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.750866704s STEP: Saw pod success May 11 16:09:52.483: INFO: Pod "pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db" satisfied condition "success or failure" May 11 16:09:52.762: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db container projected-configmap-volume-test: STEP: delete the pod May 11 16:09:53.608: INFO: Waiting for pod pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db to disappear May 11 16:09:53.868: INFO: Pod pod-projected-configmaps-c9d3feaf-5640-4519-ae4a-a57aaaafe9db no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:09:53.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-422" for this suite. 
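The defaultMode in this projected-configMap test controls the permission bits of the files materialized into the volume. A sketch of the pod shape involved (the mode 0400, the mount path, and the key name data-1 are illustrative assumptions; the configMap and container names echo the generated ones above):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-config
      mountPath: /etc/projected
  volumes:
  - name: projected-config
    projected:
      defaultMode: 0400   # the non-default mode under test; applied to every projected file
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative; the suite appends a UUID

The container prints the file mode and contents, and the test asserts on both in the pod log, much as in the success-or-failure polling above.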
• [SLOW TEST:8.541 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2139,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:09:54.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 16:09:55.018: INFO: Waiting up to 5m0s for pod "pod-3fe887f0-0c85-41b6-9887-2492174c6228" in namespace "emptydir-8694" to be "success or failure" May 11 16:09:55.021: INFO: Pod "pod-3fe887f0-0c85-41b6-9887-2492174c6228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.952928ms May 11 16:09:57.036: INFO: Pod "pod-3fe887f0-0c85-41b6-9887-2492174c6228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017679589s May 11 16:09:59.082: INFO: Pod "pod-3fe887f0-0c85-41b6-9887-2492174c6228": Phase="Running", Reason="", readiness=true. Elapsed: 4.063146947s May 11 16:10:01.087: INFO: Pod "pod-3fe887f0-0c85-41b6-9887-2492174c6228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068389112s STEP: Saw pod success May 11 16:10:01.087: INFO: Pod "pod-3fe887f0-0c85-41b6-9887-2492174c6228" satisfied condition "success or failure" May 11 16:10:01.121: INFO: Trying to get logs from node jerma-worker2 pod pod-3fe887f0-0c85-41b6-9887-2492174c6228 container test-container: STEP: delete the pod May 11 16:10:02.317: INFO: Waiting for pod pod-3fe887f0-0c85-41b6-9887-2492174c6228 to disappear May 11 16:10:02.348: INFO: Pod pod-3fe887f0-0c85-41b6-9887-2492174c6228 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:10:02.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8694" for this suite. 
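The (root,0666,tmpfs) naming encodes the test matrix: run as root, expect file mode 0666, back the emptyDir with tmpfs. On the volume side that reduces to medium: Memory; a sketch (image and command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "mount | grep /test-volume; ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed; omit 'medium' for a node-disk-backed emptyDir

The conformance test additionally writes a file with the requested mode and verifies both the permissions and that the mount really is tmpfs.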
• [SLOW TEST:8.376 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2140,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:10:02.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2671 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 16:10:03.099: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 16:10:34.519: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.196:8080/dial?request=hostname&protocol=http&host=10.244.1.22&port=8080&tries=1'] Namespace:pod-network-test-2671 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:10:34.519: INFO: >>> kubeConfig: /root/.kube/config I0511 16:10:34.547973 6 log.go:172] (0xc00278ae70) (0xc0022f5d60) Create stream I0511 16:10:34.548022 6 log.go:172] (0xc00278ae70) (0xc0022f5d60) Stream added, broadcasting: 1 I0511 16:10:34.550115 6 log.go:172] (0xc00278ae70) Reply frame received for 1 I0511 16:10:34.550175 6 log.go:172] (0xc00278ae70) (0xc0022f5e00) Create stream I0511 16:10:34.550197 6 log.go:172] (0xc00278ae70) (0xc0022f5e00) Stream added, broadcasting: 3 I0511 16:10:34.551281 6 log.go:172] (0xc00278ae70) Reply frame received for 3 I0511 16:10:34.551320 6 log.go:172] (0xc00278ae70) (0xc0022f5ea0) Create stream I0511 16:10:34.551337 6 log.go:172] (0xc00278ae70) (0xc0022f5ea0) Stream added, broadcasting: 5 I0511 16:10:34.552297 6 log.go:172] (0xc00278ae70) Reply frame received for 5 I0511 16:10:34.646450 6 log.go:172] (0xc00278ae70) Data frame received for 3 I0511 16:10:34.646474 6 log.go:172] (0xc0022f5e00) (3) Data frame handling I0511 16:10:34.646487 6 log.go:172] (0xc0022f5e00) (3) Data frame sent I0511 16:10:34.647329 6 log.go:172] (0xc00278ae70) Data frame received for 5 I0511 16:10:34.647362 6 log.go:172] (0xc0022f5ea0) (5) Data frame handling I0511 16:10:34.647407 6 log.go:172] (0xc00278ae70) Data frame received for 3 I0511 16:10:34.647423 6 log.go:172] (0xc0022f5e00) (3) Data frame handling I0511 16:10:34.649700 6 log.go:172] (0xc00278ae70) Data frame received for 1 I0511 16:10:34.649723 6 log.go:172] (0xc0022f5d60) (1) Data frame handling I0511 16:10:34.649734 6 log.go:172] (0xc0022f5d60) (1) Data frame 
sent I0511 16:10:34.649760 6 log.go:172] (0xc00278ae70) (0xc0022f5d60) Stream removed, broadcasting: 1 I0511 16:10:34.649784 6 log.go:172] (0xc00278ae70) Go away received I0511 16:10:34.649913 6 log.go:172] (0xc00278ae70) (0xc0022f5d60) Stream removed, broadcasting: 1 I0511 16:10:34.649944 6 log.go:172] (0xc00278ae70) (0xc0022f5e00) Stream removed, broadcasting: 3 I0511 16:10:34.649958 6 log.go:172] (0xc00278ae70) (0xc0022f5ea0) Stream removed, broadcasting: 5 May 11 16:10:34.650: INFO: Waiting for responses: map[] May 11 16:10:34.652: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.196:8080/dial?request=hostname&protocol=http&host=10.244.2.195&port=8080&tries=1'] Namespace:pod-network-test-2671 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:10:34.652: INFO: >>> kubeConfig: /root/.kube/config I0511 16:10:34.680733 6 log.go:172] (0xc0028aabb0) (0xc001b12500) Create stream I0511 16:10:34.680769 6 log.go:172] (0xc0028aabb0) (0xc001b12500) Stream added, broadcasting: 1 I0511 16:10:34.683472 6 log.go:172] (0xc0028aabb0) Reply frame received for 1 I0511 16:10:34.683546 6 log.go:172] (0xc0028aabb0) (0xc0028a1f40) Create stream I0511 16:10:34.683586 6 log.go:172] (0xc0028aabb0) (0xc0028a1f40) Stream added, broadcasting: 3 I0511 16:10:34.684627 6 log.go:172] (0xc0028aabb0) Reply frame received for 3 I0511 16:10:34.684672 6 log.go:172] (0xc0028aabb0) (0xc00134c1e0) Create stream I0511 16:10:34.684690 6 log.go:172] (0xc0028aabb0) (0xc00134c1e0) Stream added, broadcasting: 5 I0511 16:10:34.685998 6 log.go:172] (0xc0028aabb0) Reply frame received for 5 I0511 16:10:34.760731 6 log.go:172] (0xc0028aabb0) Data frame received for 3 I0511 16:10:34.760756 6 log.go:172] (0xc0028a1f40) (3) Data frame handling I0511 16:10:34.760771 6 log.go:172] (0xc0028a1f40) (3) Data frame sent I0511 16:10:34.761677 6 log.go:172] (0xc0028aabb0) Data frame received for 5 I0511 16:10:34.761716 6 log.go:172] (0xc00134c1e0) (5) Data frame handling I0511 16:10:34.761735 6 log.go:172] (0xc0028aabb0) Data frame received for 3 I0511 16:10:34.761744 6 log.go:172] (0xc0028a1f40) (3) Data frame handling I0511 16:10:34.763157 6 log.go:172] (0xc0028aabb0) Data frame received for 1 I0511 16:10:34.763177 6 log.go:172] (0xc001b12500) (1) Data frame handling I0511 16:10:34.763191 6 log.go:172] (0xc001b12500) (1) Data frame sent I0511 16:10:34.763206 6 log.go:172] (0xc0028aabb0) (0xc001b12500) Stream removed, broadcasting: 1 I0511 16:10:34.763246 6 log.go:172] (0xc0028aabb0) Go away received I0511 16:10:34.763292 6 log.go:172] (0xc0028aabb0) (0xc001b12500) Stream removed, broadcasting: 1 I0511 16:10:34.763312 6 log.go:172] (0xc0028aabb0) (0xc0028a1f40) Stream removed, broadcasting: 3 I0511 16:10:34.763322 6 log.go:172] (0xc0028aabb0) (0xc00134c1e0) Stream removed, broadcasting: 5 May 11 16:10:34.763: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:10:34.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2671" for this suite. 
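Decoded, the connectivity check above is a simple HTTP fan-out: the suite execs curl inside host-test-container-pod and asks one agnhost pod's /dial endpoint to contact each target pod and report the hostname it saw. Stripped of the streaming noise, each probe is just (pod IPs are the ones logged above):

/bin/sh -c curl -g -q -s 'http://10.244.2.196:8080/dial?request=hostname&protocol=http&host=10.244.1.22&port=8080&tries=1'

An empty "Waiting for responses: map[]" line means no expected hostname is still outstanding, so the check passes.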
• [SLOW TEST:32.327 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2147,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:10:34.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6004 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 11 16:10:35.010: INFO: Found 0 stateful pods, waiting for 3 May 11 16:10:46.056: INFO: Found 2 stateful pods, waiting for 3 May 11 16:10:55.015: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 16:10:55.015: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 16:10:55.015: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 11 16:10:55.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6004 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 16:10:55.274: INFO: stderr: "I0511 16:10:55.159591 2277 log.go:172] (0xc000928000) (0xc00075c0a0) Create stream\nI0511 16:10:55.159656 2277 log.go:172] (0xc000928000) (0xc00075c0a0) Stream added, broadcasting: 1\nI0511 16:10:55.162800 2277 log.go:172] (0xc000928000) Reply frame received for 1\nI0511 16:10:55.162839 2277 log.go:172] (0xc000928000) (0xc00075c140) Create stream\nI0511 16:10:55.162847 2277 log.go:172] (0xc000928000) (0xc00075c140) Stream added, broadcasting: 3\nI0511 16:10:55.163800 2277 log.go:172] (0xc000928000) Reply frame received for 3\nI0511 16:10:55.163859 2277 log.go:172] (0xc000928000) (0xc000134320) Create stream\nI0511 16:10:55.163897 2277 log.go:172] (0xc000928000) (0xc000134320) Stream added, broadcasting: 5\nI0511 16:10:55.164875 2277 log.go:172] (0xc000928000) Reply frame received for 5\nI0511 16:10:55.230393 2277 log.go:172] (0xc000928000) Data frame received for 5\nI0511 16:10:55.230412 
2277 log.go:172] (0xc000134320) (5) Data frame handling\nI0511 16:10:55.230430 2277 log.go:172] (0xc000134320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 16:10:55.268005 2277 log.go:172] (0xc000928000) Data frame received for 5\nI0511 16:10:55.268038 2277 log.go:172] (0xc000134320) (5) Data frame handling\nI0511 16:10:55.268085 2277 log.go:172] (0xc000928000) Data frame received for 3\nI0511 16:10:55.268102 2277 log.go:172] (0xc00075c140) (3) Data frame handling\nI0511 16:10:55.268119 2277 log.go:172] (0xc00075c140) (3) Data frame sent\nI0511 16:10:55.268138 2277 log.go:172] (0xc000928000) Data frame received for 3\nI0511 16:10:55.268142 2277 log.go:172] (0xc00075c140) (3) Data frame handling\nI0511 16:10:55.270454 2277 log.go:172] (0xc000928000) Data frame received for 1\nI0511 16:10:55.270484 2277 log.go:172] (0xc00075c0a0) (1) Data frame handling\nI0511 16:10:55.270512 2277 log.go:172] (0xc00075c0a0) (1) Data frame sent\nI0511 16:10:55.270537 2277 log.go:172] (0xc000928000) (0xc00075c0a0) Stream removed, broadcasting: 1\nI0511 16:10:55.270564 2277 log.go:172] (0xc000928000) Go away received\nI0511 16:10:55.270941 2277 log.go:172] (0xc000928000) (0xc00075c0a0) Stream removed, broadcasting: 1\nI0511 16:10:55.270962 2277 log.go:172] (0xc000928000) (0xc00075c140) Stream removed, broadcasting: 3\nI0511 16:10:55.270971 2277 log.go:172] (0xc000928000) (0xc000134320) Stream removed, broadcasting: 5\n" May 11 16:10:55.275: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 16:10:55.275: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 16:11:05.315: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 11 16:11:15.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6004 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 16:11:15.648: INFO: stderr: "I0511 16:11:15.562470 2297 log.go:172] (0xc0000f4420) (0xc0004e9f40) Create stream\nI0511 16:11:15.562506 2297 log.go:172] (0xc0000f4420) (0xc0004e9f40) Stream added, broadcasting: 1\nI0511 16:11:15.573403 2297 log.go:172] (0xc0000f4420) Reply frame received for 1\nI0511 16:11:15.573441 2297 log.go:172] (0xc0000f4420) (0xc000782000) Create stream\nI0511 16:11:15.573450 2297 log.go:172] (0xc0000f4420) (0xc000782000) Stream added, broadcasting: 3\nI0511 16:11:15.574589 2297 log.go:172] (0xc0000f4420) Reply frame received for 3\nI0511 16:11:15.574613 2297 log.go:172] (0xc0000f4420) (0xc00023f2c0) Create stream\nI0511 16:11:15.574623 2297 log.go:172] (0xc0000f4420) (0xc00023f2c0) Stream added, broadcasting: 5\nI0511 16:11:15.575245 2297 log.go:172] (0xc0000f4420) Reply frame received for 5\nI0511 16:11:15.643270 2297 log.go:172] (0xc0000f4420) Data frame received for 5\nI0511 16:11:15.643310 2297 log.go:172] (0xc00023f2c0) (5) Data frame handling\nI0511 16:11:15.643325 2297 log.go:172] (0xc00023f2c0) (5) Data frame sent\nI0511 16:11:15.643342 2297 log.go:172] (0xc0000f4420) Data frame received for 5\nI0511 16:11:15.643351 2297 log.go:172] (0xc00023f2c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 16:11:15.643387 2297 log.go:172] (0xc0000f4420) Data frame received for 3\nI0511 
16:11:15.643413 2297 log.go:172] (0xc000782000) (3) Data frame handling\nI0511 16:11:15.643429 2297 log.go:172] (0xc000782000) (3) Data frame sent\nI0511 16:11:15.643436 2297 log.go:172] (0xc0000f4420) Data frame received for 3\nI0511 16:11:15.643441 2297 log.go:172] (0xc000782000) (3) Data frame handling\nI0511 16:11:15.644611 2297 log.go:172] (0xc0000f4420) Data frame received for 1\nI0511 16:11:15.644623 2297 log.go:172] (0xc0004e9f40) (1) Data frame handling\nI0511 16:11:15.644632 2297 log.go:172] (0xc0004e9f40) (1) Data frame sent\nI0511 16:11:15.644642 2297 log.go:172] (0xc0000f4420) (0xc0004e9f40) Stream removed, broadcasting: 1\nI0511 16:11:15.644666 2297 log.go:172] (0xc0000f4420) Go away received\nI0511 16:11:15.644815 2297 log.go:172] (0xc0000f4420) (0xc0004e9f40) Stream removed, broadcasting: 1\nI0511 16:11:15.644827 2297 log.go:172] (0xc0000f4420) (0xc000782000) Stream removed, broadcasting: 3\nI0511 16:11:15.644834 2297 log.go:172] (0xc0000f4420) (0xc00023f2c0) Stream removed, broadcasting: 5\n" May 11 16:11:15.648: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 16:11:15.648: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 16:11:25.663: INFO: Waiting for StatefulSet statefulset-6004/ss2 to complete update May 11 16:11:25.663: INFO: Waiting for Pod statefulset-6004/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 16:11:25.663: INFO: Waiting for Pod statefulset-6004/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 16:11:25.663: INFO: Waiting for Pod statefulset-6004/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 16:11:35.697: INFO: Waiting for StatefulSet statefulset-6004/ss2 to complete update May 11 16:11:35.697: INFO: Waiting for Pod statefulset-6004/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 16:11:35.697: INFO: Waiting for Pod statefulset-6004/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 16:11:45.670: INFO: Waiting for StatefulSet statefulset-6004/ss2 to complete update May 11 16:11:45.670: INFO: Waiting for Pod statefulset-6004/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 11 16:11:55.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6004 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 16:11:55.939: INFO: stderr: "I0511 16:11:55.799846 2314 log.go:172] (0xc0009dc000) (0xc000739040) Create stream\nI0511 16:11:55.799944 2314 log.go:172] (0xc0009dc000) (0xc000739040) Stream added, broadcasting: 1\nI0511 16:11:55.801095 2314 log.go:172] (0xc0009dc000) Reply frame received for 1\nI0511 16:11:55.801271 2314 log.go:172] (0xc0009dc000) (0xc0007be0a0) Create stream\nI0511 16:11:55.801282 2314 log.go:172] (0xc0009dc000) (0xc0007be0a0) Stream added, broadcasting: 3\nI0511 16:11:55.802052 2314 log.go:172] (0xc0009dc000) Reply frame received for 3\nI0511 16:11:55.802080 2314 log.go:172] (0xc0009dc000) (0xc00076e000) Create stream\nI0511 16:11:55.802089 2314 log.go:172] (0xc0009dc000) (0xc00076e000) Stream added, broadcasting: 5\nI0511 16:11:55.802694 2314 log.go:172] (0xc0009dc000) Reply frame received for 5\nI0511 16:11:55.852774 2314 log.go:172] (0xc0009dc000) Data frame received for 5\nI0511 16:11:55.852809 2314 log.go:172] (0xc00076e000) 
(5) Data frame handling\nI0511 16:11:55.852826 2314 log.go:172] (0xc00076e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 16:11:55.934335 2314 log.go:172] (0xc0009dc000) Data frame received for 5\nI0511 16:11:55.934380 2314 log.go:172] (0xc00076e000) (5) Data frame handling\nI0511 16:11:55.934399 2314 log.go:172] (0xc0009dc000) Data frame received for 3\nI0511 16:11:55.934407 2314 log.go:172] (0xc0007be0a0) (3) Data frame handling\nI0511 16:11:55.934416 2314 log.go:172] (0xc0007be0a0) (3) Data frame sent\nI0511 16:11:55.934423 2314 log.go:172] (0xc0009dc000) Data frame received for 3\nI0511 16:11:55.934429 2314 log.go:172] (0xc0007be0a0) (3) Data frame handling\nI0511 16:11:55.935713 2314 log.go:172] (0xc0009dc000) Data frame received for 1\nI0511 16:11:55.935738 2314 log.go:172] (0xc000739040) (1) Data frame handling\nI0511 16:11:55.935753 2314 log.go:172] (0xc000739040) (1) Data frame sent\nI0511 16:11:55.935769 2314 log.go:172] (0xc0009dc000) (0xc000739040) Stream removed, broadcasting: 1\nI0511 16:11:55.935805 2314 log.go:172] (0xc0009dc000) Go away received\nI0511 16:11:55.936110 2314 log.go:172] (0xc0009dc000) (0xc000739040) Stream removed, broadcasting: 1\nI0511 16:11:55.936136 2314 log.go:172] (0xc0009dc000) (0xc0007be0a0) Stream removed, broadcasting: 3\nI0511 16:11:55.936148 2314 log.go:172] (0xc0009dc000) (0xc00076e000) Stream removed, broadcasting: 5\n" May 11 16:11:55.939: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 16:11:55.939: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 16:12:05.966: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 11 16:12:16.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6004 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 16:12:16.302: INFO: stderr: "I0511 16:12:16.198801 2331 log.go:172] (0xc0005ac210) (0xc0005d86e0) Create stream\nI0511 16:12:16.198877 2331 log.go:172] (0xc0005ac210) (0xc0005d86e0) Stream added, broadcasting: 1\nI0511 16:12:16.201233 2331 log.go:172] (0xc0005ac210) Reply frame received for 1\nI0511 16:12:16.201255 2331 log.go:172] (0xc0005ac210) (0xc0007574a0) Create stream\nI0511 16:12:16.201262 2331 log.go:172] (0xc0005ac210) (0xc0007574a0) Stream added, broadcasting: 3\nI0511 16:12:16.202050 2331 log.go:172] (0xc0005ac210) Reply frame received for 3\nI0511 16:12:16.202086 2331 log.go:172] (0xc0005ac210) (0xc0008d4000) Create stream\nI0511 16:12:16.202096 2331 log.go:172] (0xc0005ac210) (0xc0008d4000) Stream added, broadcasting: 5\nI0511 16:12:16.202909 2331 log.go:172] (0xc0005ac210) Reply frame received for 5\nI0511 16:12:16.294309 2331 log.go:172] (0xc0005ac210) Data frame received for 3\nI0511 16:12:16.294339 2331 log.go:172] (0xc0007574a0) (3) Data frame handling\nI0511 16:12:16.294348 2331 log.go:172] (0xc0007574a0) (3) Data frame sent\nI0511 16:12:16.294370 2331 log.go:172] (0xc0005ac210) Data frame received for 5\nI0511 16:12:16.294376 2331 log.go:172] (0xc0008d4000) (5) Data frame handling\nI0511 16:12:16.294383 2331 log.go:172] (0xc0008d4000) (5) Data frame sent\nI0511 16:12:16.294392 2331 log.go:172] (0xc0005ac210) Data frame received for 5\nI0511 16:12:16.294400 2331 log.go:172] (0xc0008d4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 16:12:16.294473 2331 log.go:172] 
(0xc0005ac210) Data frame received for 3\nI0511 16:12:16.294525 2331 log.go:172] (0xc0007574a0) (3) Data frame handling\nI0511 16:12:16.296143 2331 log.go:172] (0xc0005ac210) Data frame received for 1\nI0511 16:12:16.296155 2331 log.go:172] (0xc0005d86e0) (1) Data frame handling\nI0511 16:12:16.296173 2331 log.go:172] (0xc0005d86e0) (1) Data frame sent\nI0511 16:12:16.296349 2331 log.go:172] (0xc0005ac210) (0xc0005d86e0) Stream removed, broadcasting: 1\nI0511 16:12:16.296371 2331 log.go:172] (0xc0005ac210) Go away received\nI0511 16:12:16.296755 2331 log.go:172] (0xc0005ac210) (0xc0005d86e0) Stream removed, broadcasting: 1\nI0511 16:12:16.296776 2331 log.go:172] (0xc0005ac210) (0xc0007574a0) Stream removed, broadcasting: 3\nI0511 16:12:16.296786 2331 log.go:172] (0xc0005ac210) (0xc0008d4000) Stream removed, broadcasting: 5\n" May 11 16:12:16.302: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 16:12:16.302: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 16:12:26.415: INFO: Waiting for StatefulSet statefulset-6004/ss2 to complete update May 11 16:12:26.415: INFO: Waiting for Pod statefulset-6004/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 16:12:26.415: INFO: Waiting for Pod statefulset-6004/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 16:12:26.415: INFO: Waiting for Pod statefulset-6004/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 16:12:36.422: INFO: Waiting for StatefulSet statefulset-6004/ss2 to complete update May 11 16:12:36.422: INFO: Waiting for Pod statefulset-6004/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 16:12:36.422: INFO: Waiting for Pod statefulset-6004/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 16:12:46.440: INFO: Waiting for StatefulSet statefulset-6004/ss2 to complete update May 11 16:12:46.440: INFO: Waiting for Pod statefulset-6004/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 16:12:56.497: INFO: Waiting for StatefulSet statefulset-6004/ss2 to complete update May 11 16:12:56.497: INFO: Waiting for Pod statefulset-6004/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 16:13:06.422: INFO: Deleting all statefulset in ns statefulset-6004 May 11 16:13:06.424: INFO: Scaling statefulset ss2 to 0 May 11 16:13:46.680: INFO: Waiting for statefulset status.replicas updated to 0 May 11 16:13:46.683: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:13:47.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6004" for this suite. 
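Everything in this StatefulSet test is driven by one field: bumping the pod template's image creates a new controller revision (ss2-65c7964b94 replacing ss2-84f9d6bf57 in the log), and changing it back is the "rollback". With the default RollingUpdate strategy, pods are replaced one ordinal at a time, highest first, which is why ss2-2 drops out of the waiting list before ss2-1 and ss2-0. A sketch of the StatefulSet with the fields that matter (the container name and labels are illustrative assumptions; the serviceName matches the "Creating service test" step above):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test              # the headless Service created at the start of the test
  replicas: 3
  selector:
    matchLabels:
      app: ss2                   # assumption; must match the template labels below
  updateStrategy:
    type: RollingUpdate          # the default for apps/v1 StatefulSets
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver          # assumption; the log only names the images
        image: docker.io/library/httpd:2.4.39-alpine   # bumped from 2.4.38-alpine to trigger the update

The mv index.html dance in the exec commands is unrelated to the update mechanics; it toggles the httpd readiness check target so the test can also exercise update behavior against not-ready pods.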
• [SLOW TEST:192.439 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":130,"skipped":2155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:13:47.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 11 16:13:47.421: INFO: >>> kubeConfig: /root/.kube/config May 11 16:13:50.467: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:14:02.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7243" for this suite. 
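What "works for multiple CRDs of different groups" actually checks is that the API server merges each CRD's structural schema into the published OpenAPI document without clobbering the other group, so clients such as kubectl explain can describe both custom resources. The CRDs the suite registers are generated, but their shape is roughly (the group, names, and the single spec property here are illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object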
• [SLOW TEST:14.967 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":131,"skipped":2199,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:14:02.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3337 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 16:14:02.353: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 16:14:28.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.27:8080/dial?request=hostname&protocol=udp&host=10.244.1.26&port=8081&tries=1'] Namespace:pod-network-test-3337 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:14:28.788: INFO: >>> kubeConfig: /root/.kube/config I0511 16:14:29.774296 6 log.go:172] (0xc0061302c0) (0xc002711c20) Create stream I0511 16:14:29.774329 6 log.go:172] (0xc0061302c0) (0xc002711c20) Stream added, broadcasting: 1 I0511 16:14:29.775890 6 log.go:172] (0xc0061302c0) Reply frame received for 1 I0511 16:14:29.775936 6 log.go:172] (0xc0061302c0) (0xc0022f5680) Create stream I0511 16:14:29.775946 6 log.go:172] (0xc0061302c0) (0xc0022f5680) Stream added, broadcasting: 3 I0511 16:14:29.776868 6 log.go:172] (0xc0061302c0) Reply frame received for 3 I0511 16:14:29.776896 6 log.go:172] (0xc0061302c0) (0xc0028a0500) Create stream I0511 16:14:29.776904 6 log.go:172] (0xc0061302c0) (0xc0028a0500) Stream added, broadcasting: 5 I0511 16:14:29.778037 6 log.go:172] (0xc0061302c0) Reply frame received for 5 I0511 16:14:29.867473 6 log.go:172] (0xc0061302c0) Data frame received for 3 I0511 16:14:29.867529 6 log.go:172] (0xc0022f5680) (3) Data frame handling I0511 16:14:29.867557 6 log.go:172] (0xc0022f5680) (3) Data frame sent I0511 16:14:29.868038 6 log.go:172] (0xc0061302c0) Data frame received for 5 I0511 16:14:29.868073 6 log.go:172] (0xc0028a0500) (5) Data frame handling I0511 16:14:29.868096 6 log.go:172] (0xc0061302c0) Data frame received for 3 I0511 16:14:29.868107 6 log.go:172] (0xc0022f5680) (3) Data frame handling I0511 16:14:29.870021 6 log.go:172] (0xc0061302c0) Data frame received for 1 I0511 16:14:29.870054 6 log.go:172] (0xc002711c20) (1) Data frame handling I0511 
16:14:29.870082 6 log.go:172] (0xc002711c20) (1) Data frame sent I0511 16:14:29.870106 6 log.go:172] (0xc0061302c0) (0xc002711c20) Stream removed, broadcasting: 1 I0511 16:14:29.870239 6 log.go:172] (0xc0061302c0) (0xc002711c20) Stream removed, broadcasting: 1 I0511 16:14:29.870268 6 log.go:172] (0xc0061302c0) (0xc0022f5680) Stream removed, broadcasting: 3 I0511 16:14:29.870287 6 log.go:172] (0xc0061302c0) (0xc0028a0500) Stream removed, broadcasting: 5 May 11 16:14:29.870: INFO: Waiting for responses: map[] I0511 16:14:29.870401 6 log.go:172] (0xc0061302c0) Go away received May 11 16:14:29.970: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.27:8080/dial?request=hostname&protocol=udp&host=10.244.2.203&port=8081&tries=1'] Namespace:pod-network-test-3337 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:14:29.970: INFO: >>> kubeConfig: /root/.kube/config I0511 16:14:30.183902 6 log.go:172] (0xc0059378c0) (0xc0028a0b40) Create stream I0511 16:14:30.183929 6 log.go:172] (0xc0059378c0) (0xc0028a0b40) Stream added, broadcasting: 1 I0511 16:14:30.186129 6 log.go:172] (0xc0059378c0) Reply frame received for 1 I0511 16:14:30.186169 6 log.go:172] (0xc0059378c0) (0xc00240a000) Create stream I0511 16:14:30.186307 6 log.go:172] (0xc0059378c0) (0xc00240a000) Stream added, broadcasting: 3 I0511 16:14:30.187269 6 log.go:172] (0xc0059378c0) Reply frame received for 3 I0511 16:14:30.187318 6 log.go:172] (0xc0059378c0) (0xc0028a0c80) Create stream I0511 16:14:30.187342 6 log.go:172] (0xc0059378c0) (0xc0028a0c80) Stream added, broadcasting: 5 I0511 16:14:30.188170 6 log.go:172] (0xc0059378c0) Reply frame received for 5 I0511 16:14:30.257776 6 log.go:172] (0xc0059378c0) Data frame received for 3 I0511 16:14:30.257823 6 log.go:172] (0xc00240a000) (3) Data frame handling I0511 16:14:30.257856 6 log.go:172] (0xc00240a000) (3) Data frame sent I0511 16:14:30.258636 6 log.go:172] (0xc0059378c0) Data frame received for 3 I0511 16:14:30.258664 6 log.go:172] (0xc00240a000) (3) Data frame handling I0511 16:14:30.258689 6 log.go:172] (0xc0059378c0) Data frame received for 5 I0511 16:14:30.258723 6 log.go:172] (0xc0028a0c80) (5) Data frame handling I0511 16:14:30.260216 6 log.go:172] (0xc0059378c0) Data frame received for 1 I0511 16:14:30.260243 6 log.go:172] (0xc0028a0b40) (1) Data frame handling I0511 16:14:30.260281 6 log.go:172] (0xc0028a0b40) (1) Data frame sent I0511 16:14:30.260315 6 log.go:172] (0xc0059378c0) (0xc0028a0b40) Stream removed, broadcasting: 1 I0511 16:14:30.260380 6 log.go:172] (0xc0059378c0) Go away received I0511 16:14:30.260436 6 log.go:172] (0xc0059378c0) (0xc0028a0b40) Stream removed, broadcasting: 1 I0511 16:14:30.260460 6 log.go:172] (0xc0059378c0) (0xc00240a000) Stream removed, broadcasting: 3 I0511 16:14:30.260484 6 log.go:172] (0xc0059378c0) (0xc0028a0c80) Stream removed, broadcasting: 5 May 11 16:14:30.260: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:14:30.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3337" for this suite. 
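The UDP variant follows the same fan-out pattern as the HTTP test earlier, differing only in the probe parameters: protocol=udp and the agnhost UDP listener port 8081, e.g. (IPs as logged above):

/bin/sh -c curl -g -q -s 'http://10.244.1.27:8080/dial?request=hostname&protocol=udp&host=10.244.1.26&port=8081&tries=1'

Note that the dial request itself still travels over HTTP to the probe pod; only the hop from probe pod to target pod uses UDP.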
• [SLOW TEST:28.092 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:14:30.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 16:14:30.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5791' May 11 16:14:31.152: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 16:14:31.152: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 11 16:14:33.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5791' May 11 16:14:33.548: INFO: stderr: "" May 11 16:14:33.548: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:14:33.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5791" for this suite. 
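As the stderr line notes, kubectl run --generator=deployment/apps.v1 was already deprecated when this suite ran. Following the "use kubectl create instead" guidance in that warning, the non-deprecated equivalent of what this test does would be kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine; the verification and cleanup steps would be unchanged.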
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":133,"skipped":2252,"failed":0} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:14:33.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-1858 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1858 to expose endpoints map[] May 11 16:14:34.990: INFO: Get endpoints failed (2.630718ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 11 16:14:36.032: INFO: successfully validated that service multi-endpoint-test in namespace services-1858 exposes endpoints map[] (1.044351081s elapsed) STEP: Creating pod pod1 in namespace services-1858 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1858 to expose endpoints map[pod1:[100]] May 11 16:14:41.664: INFO: successfully validated that service multi-endpoint-test in namespace services-1858 exposes endpoints map[pod1:[100]] (5.351140294s elapsed) STEP: Creating pod pod2 in namespace services-1858 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1858 to expose endpoints map[pod1:[100] pod2:[101]] May 11 16:14:46.029: INFO: successfully validated that service multi-endpoint-test in namespace services-1858 exposes endpoints map[pod1:[100] pod2:[101]] (4.361186634s elapsed) STEP: Deleting pod pod1 in namespace services-1858 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1858 to expose endpoints map[pod2:[101]] May 11 16:14:47.837: INFO: successfully validated that service multi-endpoint-test in namespace services-1858 exposes endpoints map[pod2:[101]] (1.803951075s elapsed) STEP: Deleting pod pod2 in namespace services-1858 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1858 to expose endpoints map[] May 11 16:14:49.370: INFO: successfully validated that service multi-endpoint-test in namespace services-1858 exposes endpoints map[] (1.325284635s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:14:50.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1858" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.660 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":134,"skipped":2254,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:14:50.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 16:14:50.898: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 16:14:50.930: INFO: Waiting for terminating namespaces to be deleted... May 11 16:14:50.933: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 11 16:14:50.946: INFO: pod1 from services-1858 started at 2020-05-11 16:14:36 +0000 UTC (1 container statuses recorded) May 11 16:14:50.946: INFO: Container pause ready: false, restart count 0 May 11 16:14:50.946: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 16:14:50.946: INFO: Container kindnet-cni ready: true, restart count 0 May 11 16:14:50.946: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 16:14:50.946: INFO: Container kube-proxy ready: true, restart count 0 May 11 16:14:50.946: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 11 16:14:50.963: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 11 16:14:50.963: INFO: Container kube-hunter ready: false, restart count 0 May 11 16:14:50.963: INFO: pod2 from services-1858 started at 2020-05-11 16:14:41 +0000 UTC (1 container statuses recorded) May 11 16:14:50.963: INFO: Container pause ready: false, restart count 0 May 11 16:14:50.963: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 16:14:50.963: INFO: Container kindnet-cni ready: true, restart count 0 May 11 16:14:50.963: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 11 16:14:50.963: INFO: Container kube-bench ready: false, restart count 0 May 11 16:14:50.963: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 16:14:50.963: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3f263a25-a8a4-414a-84b3-4e00b0e11ad5 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3f263a25-a8a4-414a-84b3-4e00b0e11ad5 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3f263a25-a8a4-414a-84b3-4e00b0e11ad5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:15:03.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9115" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:12.955 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":135,"skipped":2263,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:15:03.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:15:04.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd" in namespace "downward-api-7993" to be "success or failure" May 11 16:15:04.280: INFO: Pod "downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 137.260268ms May 11 16:15:06.333: INFO: Pod "downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191041758s May 11 16:15:08.544: INFO: Pod "downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401427039s May 11 16:15:10.669: INFO: Pod "downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.52688138s STEP: Saw pod success May 11 16:15:10.669: INFO: Pod "downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd" satisfied condition "success or failure" May 11 16:15:10.672: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd container client-container: STEP: delete the pod May 11 16:15:11.055: INFO: Waiting for pod downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd to disappear May 11 16:15:11.250: INFO: Pod downwardapi-volume-373b0b06-aa8e-41cd-82de-34d069a7f3bd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:15:11.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7993" for this suite. • [SLOW TEST:8.085 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2285,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:15:11.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 11 16:15:18.559: INFO: Successfully updated pod "pod-update-41875449-40a7-4a63-81e6-3b9a636b0944" STEP: verifying the updated pod is in kubernetes May 11 16:15:18.864: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:15:18.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3557" for this suite. 
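The update step in the Pods test above modifies the pod's labels in place and then re-reads the object to confirm the apiserver serves the new version. A minimal way to reproduce that flow by hand with kubectl, reusing the pod and namespace names from this run (the label key and value are illustrative assumptions, since the log does not print them):

# patch the pod's labels in place; pod metadata such as labels can be updated freely
kubectl --kubeconfig=/root/.kube/config patch pod pod-update-41875449-40a7-4a63-81e6-3b9a636b0944 --namespace=pods-3557 -p '{"metadata":{"labels":{"time":"updated"}}}'
# verify the updated pod is what the apiserver now returns
kubectl --kubeconfig=/root/.kube/config get pod pod-update-41875449-40a7-4a63-81e6-3b9a636b0944 --namespace=pods-3557 --show-labels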
• [SLOW TEST:7.632 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2290,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:15:18.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 11 16:15:19.711: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278482 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 16:15:19.712: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278482 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 11 16:15:29.851: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278522 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 16:15:29.851: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278522 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 11 16:15:39.860: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278551 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} May 11 16:15:39.860: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278551 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 11 16:15:49.866: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278582 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 16:15:49.866: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-a de86e927-b133-44f2-aa4b-0fa20a6082e1 15278582 0 2020-05-11 16:15:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 11 16:15:59.872: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-b b4c93d79-da39-4b91-9325-6aab606b0fce 15278610 0 2020-05-11 16:15:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 16:15:59.872: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-b b4c93d79-da39-4b91-9325-6aab606b0fce 15278610 0 2020-05-11 16:15:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 11 16:16:09.967: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-b b4c93d79-da39-4b91-9325-6aab606b0fce 15278637 0 2020-05-11 16:15:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 16:16:09.967: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3564 /api/v1/namespaces/watch-3564/configmaps/e2e-watch-test-configmap-b b4c93d79-da39-4b91-9325-6aab606b0fce 15278637 0 2020-05-11 16:15:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:16:19.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3564" for this suite. 
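Each of the three watchers above (label A, label B, and A-or-B) receives exactly the ADDED/MODIFIED/DELETED events whose ConfigMap labels match its selector, and nothing else. The same behavior can be observed interactively with a label-selector watch; the namespace and label values below are the ones from this run:

# stream configmap changes matching label A, as watcher A sees them
kubectl --kubeconfig=/root/.kube/config get configmaps --namespace=watch-3564 -l watch-this-configmap=multiple-watchers-A --watch
# a set-based selector approximates the A-or-B watcher
kubectl --kubeconfig=/root/.kube/config get configmaps --namespace=watch-3564 -l 'watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)' --watch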
• [SLOW TEST:61.089 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":138,"skipped":2298,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:16:19.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 11 16:16:20.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-362 -- logs-generator --log-lines-total 100 --run-duration 20s' May 11 16:16:23.749: INFO: stderr: "" May 11 16:16:23.749: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 11 16:16:23.749: INFO: Waiting up to 5m0s for 1 pod to be running and ready, or succeeded: [logs-generator] May 11 16:16:23.750: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-362" to be "running and ready, or succeeded" May 11 16:16:23.850: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 100.595979ms May 11 16:16:25.853: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103794149s May 11 16:16:27.857: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107537554s May 11 16:16:29.987: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.237808129s May 11 16:16:29.987: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 11 16:16:29.987: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 11 16:16:29.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362' May 11 16:16:30.429: INFO: stderr: "" May 11 16:16:30.429: INFO: stdout: "I0511 16:16:28.273852 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/6x8 358\nI0511 16:16:28.473994 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/8wj 376\nI0511 16:16:28.674119 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/9gg 395\nI0511 16:16:28.874021 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/lxk 461\nI0511 16:16:29.074012 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/b9w 573\nI0511 16:16:29.274026 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/9tk 537\nI0511 16:16:29.474034 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/t8n8 458\nI0511 16:16:29.674022 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/hf7 505\nI0511 16:16:29.874014 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/88m 240\nI0511 16:16:30.073992 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/5mpx 347\nI0511 16:16:30.273987 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/jsc 338\n" STEP: limiting log lines May 11 16:16:30.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --tail=1' May 11 16:16:30.672: INFO: stderr: "" May 11 16:16:30.672: INFO: stdout: "I0511 16:16:30.473981 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/5k9 328\n" May 11 16:16:30.672: INFO: got output "I0511 16:16:30.473981 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/5k9 328\n" STEP: limiting log bytes May 11 16:16:30.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --limit-bytes=1' May 11 16:16:30.784: INFO: stderr: "" May 11 16:16:30.784: INFO: stdout: "I" May 11 16:16:30.784: INFO: got output "I" STEP: exposing timestamps May 11 16:16:30.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --tail=1 --timestamps' May 11 16:16:30.896: INFO: stderr: "" May 11 16:16:30.896: INFO: stdout: "2020-05-11T16:16:30.874080956Z I0511 16:16:30.873962 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/6qw7 564\n" May 11 16:16:30.896: INFO: got output "2020-05-11T16:16:30.874080956Z I0511 16:16:30.873962 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/6qw7 564\n" STEP: restricting to a time range May 11 16:16:33.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --since=1s' May 11 16:16:33.496: INFO: stderr: "" May 11 16:16:33.496: INFO: stdout: "I0511 16:16:32.674238 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/8cfr 460\nI0511 16:16:32.873975 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/h5z 367\nI0511 16:16:33.073974 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/m9k4 455\nI0511 16:16:33.273967 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/wjlm 468\nI0511 16:16:33.473948 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/2pq 376\n" May 11 16:16:33.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator
--namespace=kubectl-362 --since=24h' May 11 16:16:33.589: INFO: stderr: "" May 11 16:16:33.589: INFO: stdout: "I0511 16:16:28.273852 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/6x8 358\nI0511 16:16:28.473994 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/8wj 376\nI0511 16:16:28.674119 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/9gg 395\nI0511 16:16:28.874021 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/lxk 461\nI0511 16:16:29.074012 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/b9w 573\nI0511 16:16:29.274026 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/9tk 537\nI0511 16:16:29.474034 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/t8n8 458\nI0511 16:16:29.674022 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/hf7 505\nI0511 16:16:29.874014 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/88m 240\nI0511 16:16:30.073992 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/5mpx 347\nI0511 16:16:30.273987 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/jsc 338\nI0511 16:16:30.473981 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/5k9 328\nI0511 16:16:30.673987 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/r9l 266\nI0511 16:16:30.873962 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/6qw7 564\nI0511 16:16:31.073994 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/dxtt 539\nI0511 16:16:31.273963 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/v4pm 324\nI0511 16:16:31.473977 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/zss 552\nI0511 16:16:31.673999 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/nh9r 299\nI0511 16:16:31.874005 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/vvb 332\nI0511 16:16:32.073953 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/xkpq 377\nI0511 16:16:32.273971 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/kqh 303\nI0511 16:16:32.474077 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/rt26 541\nI0511 16:16:32.674238 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/8cfr 460\nI0511 16:16:32.873975 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/h5z 367\nI0511 16:16:33.073974 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/m9k4 455\nI0511 16:16:33.273967 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/wjlm 468\nI0511 16:16:33.473948 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/2pq 376\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 11 16:16:33.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-362' May 11 16:16:39.254: INFO: stderr: "" May 11 16:16:39.254: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:16:39.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-362" for this suite. 
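The filtering flags exercised above compose freely, each narrowing the same log stream along a different axis: line count, byte count, timestamp prefixes, and time window. These are the exact invocations from this run, minus the absolute kubectl path (the second logs-generator argument is the container name inside the pod):

kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --tail=1
kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --limit-bytes=1
kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --tail=1 --timestamps
kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-362 --since=1s

Note that --since takes a relative duration (1s, 24h); its sibling flag --since-time, not used in this run, takes an absolute RFC3339 timestamp instead.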
• [SLOW TEST:19.291 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":139,"skipped":2309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:16:39.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:16:39.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee" in namespace "downward-api-8124" to be "success or failure" May 11 16:16:39.397: INFO: Pod "downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.638076ms May 11 16:16:42.318: INFO: Pod "downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.924665476s May 11 16:16:44.477: INFO: Pod "downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 5.083786749s May 11 16:16:46.480: INFO: Pod "downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.0867875s STEP: Saw pod success May 11 16:16:46.481: INFO: Pod "downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee" satisfied condition "success or failure" May 11 16:16:46.483: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee container client-container: STEP: delete the pod May 11 16:16:46.678: INFO: Waiting for pod downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee to disappear May 11 16:16:46.902: INFO: Pod downwardapi-volume-88eb70ef-6869-4de2-b6fc-9bb48f70f6ee no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:16:46.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8124" for this suite. 
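The DefaultMode and per-item mode checks in the two Downward API volume tests above are both driven by the downwardAPI volume source in the pod spec. A minimal sketch of a pod exercising a per-item mode (all names and the modes are illustrative; the log does not print the manifest the test generated):

kubectl --kubeconfig=/root/.kube/config apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0644        # volume-wide default for files without an explicit mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400             # per-item mode, overrides defaultMode for this file
EOF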
• [SLOW TEST:7.638 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2335,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:16:46.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-06b34b95-0fdf-4bd1-8f1a-1d56a384d205 STEP: Creating a pod to test consume secrets May 11 16:16:47.157: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae" in namespace "projected-4908" to be "success or failure" May 11 16:16:47.166: INFO: Pod "pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.601179ms May 11 16:16:49.288: INFO: Pod "pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130441104s May 11 16:16:51.407: INFO: Pod "pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249679375s May 11 16:16:53.411: INFO: Pod "pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.253474675s STEP: Saw pod success May 11 16:16:53.411: INFO: Pod "pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae" satisfied condition "success or failure" May 11 16:16:53.414: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae container projected-secret-volume-test: STEP: delete the pod May 11 16:16:53.497: INFO: Waiting for pod pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae to disappear May 11 16:16:53.500: INFO: Pod pod-projected-secrets-f3e1dd25-cadc-4450-b187-2e43e4d459ae no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:16:53.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4908" for this suite. 
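In the projected-secret test above, "mappings" means each secret key is remapped to a chosen file path, and "Item Mode" is the per-file mode on that projected item. A minimal sketch of the volume stanza involved (the secret name is the one from this run; the key, path, and mode are illustrative):

kubectl --kubeconfig=/root/.kube/config apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-06b34b95-0fdf-4bd1-8f1a-1d56a384d205
          items:
          - key: data-1            # key in the Secret
            path: new-path-data-1  # file name it is mapped to
            mode: 0400             # per-item mode
EOF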
• [SLOW TEST:6.599 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2352,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:16:53.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods changes May 11 16:17:00.772: INFO: Pod name pod-adoption-release: Found 1 pod out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:17:01.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6670" for this suite.
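Adoption and release above are both driven purely by label matching: a ReplicaSet whose selector matches a bare pod sets itself as that pod's controller ownerReference, and changing the pod's matched label severs that link again (the ReplicaSet then creates a replacement). A sketch of how to observe this by hand, reusing the pod and namespace names from this run (the replacement label value is an illustrative assumption):

# after adoption, the pod's controller ownerReference points at the ReplicaSet
kubectl --kubeconfig=/root/.kube/config get pod pod-adoption-release --namespace=replicaset-6670 -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}'
# changing the matched label releases the pod from the ReplicaSet
kubectl --kubeconfig=/root/.kube/config label pod pod-adoption-release --namespace=replicaset-6670 name=pod-adoption-release-released --overwrite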
• [SLOW TEST:8.339 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":142,"skipped":2358,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:17:01.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-33e7b703-cae6-4118-9fdc-5643633107a7 STEP: Creating a pod to test consume secrets May 11 16:17:03.979: INFO: Waiting up to 5m0s for pod "pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330" in namespace "secrets-4488" to be "success or failure" May 11 16:17:04.190: INFO: Pod "pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330": Phase="Pending", Reason="", readiness=false. Elapsed: 210.361751ms May 11 16:17:06.193: INFO: Pod "pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213544502s May 11 16:17:08.351: INFO: Pod "pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371624188s May 11 16:17:10.354: INFO: Pod "pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.374444972s STEP: Saw pod success May 11 16:17:10.354: INFO: Pod "pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330" satisfied condition "success or failure" May 11 16:17:10.356: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330 container secret-volume-test: STEP: delete the pod May 11 16:17:10.475: INFO: Waiting for pod pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330 to disappear May 11 16:17:10.630: INFO: Pod pod-secrets-c94df3b3-0a61-484e-828b-b8cfbadc4330 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:17:10.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4488" for this suite. STEP: Destroying namespace "secret-namespace-8093" for this suite. 
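The point of the test above is that secret references in a pod spec are namespace-scoped: a pod in secrets-4488 mounting a secret by name can only ever resolve the copy in its own namespace, even though secret-namespace-8093 holds a secret with the same name. A sketch of the setup by hand (namespaces, names, and values below are illustrative):

kubectl --kubeconfig=/root/.kube/config create namespace ns-a
kubectl --kubeconfig=/root/.kube/config create namespace ns-b
# the same secret name in two namespaces, with different payloads
kubectl --kubeconfig=/root/.kube/config create secret generic same-name --from-literal=data-1=value-in-a --namespace=ns-a
kubectl --kubeconfig=/root/.kube/config create secret generic same-name --from-literal=data-1=value-in-b --namespace=ns-b
# a pod in ns-a whose volume references secretName: same-name mounts only the ns-a copy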
• [SLOW TEST:8.831 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2373,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:17:10.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7f62101c-9864-47eb-8044-b460b156d23c STEP: Creating a pod to test consume secrets May 11 16:17:11.828: INFO: Waiting up to 5m0s for pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee" in namespace "secrets-6654" to be "success or failure" May 11 16:17:12.103: INFO: Pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee": Phase="Pending", Reason="", readiness=false. Elapsed: 274.892879ms May 11 16:17:14.106: INFO: Pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277921518s May 11 16:17:16.530: INFO: Pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701486918s May 11 16:17:18.924: INFO: Pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee": Phase="Pending", Reason="", readiness=false. Elapsed: 7.095409547s May 11 16:17:21.306: INFO: Pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.478205465s May 11 16:17:23.743: INFO: Pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.915181657s STEP: Saw pod success May 11 16:17:23.743: INFO: Pod "pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee" satisfied condition "success or failure" May 11 16:17:23.746: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee container secret-volume-test: STEP: delete the pod May 11 16:17:24.170: INFO: Waiting for pod pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee to disappear May 11 16:17:24.203: INFO: Pod pod-secrets-b3c3887a-ebcd-4e44-90aa-294cfc474fee no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:17:24.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6654" for this suite. 
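In the multiple-volumes test above, one Secret backs two separate volumes mounted at different paths in the same container, and both mounts expose the same keys. A minimal sketch of that pod spec shape (the secret name is the one from this run; the pod name and mount paths are illustrative):

kubectl --kubeconfig=/root/.kube/config apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-7f62101c-9864-47eb-8044-b460b156d23c
  - name: secret-volume-2
    secret:
      secretName: secret-test-7f62101c-9864-47eb-8044-b460b156d23c   # same Secret, second volume
EOF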
• [SLOW TEST:13.532 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:17:24.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:18:02.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6533" for this suite. STEP: Destroying namespace "nsdeletetest-5716" for this suite. May 11 16:18:02.486: INFO: Namespace nsdeletetest-5716 was already deleted STEP: Destroying namespace "nsdeletetest-7446" for this suite. 
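Namespace deletion above cascades to the pods inside the namespace, and the namespace stays in Terminating until that cleanup finishes. A minimal way to observe the same behavior by hand, using the same v1.17-era kubectl run flags this suite uses (namespace, pod name, and image are illustrative):

# create a namespace with one pod in it
kubectl --kubeconfig=/root/.kube/config create namespace nsdeletetest-demo
kubectl --kubeconfig=/root/.kube/config run test-pod --generator=run-pod/v1 --image=k8s.gcr.io/pause:3.1 --namespace=nsdeletetest-demo
# deleting the namespace removes the pod as part of termination
kubectl --kubeconfig=/root/.kube/config delete namespace nsdeletetest-demo
kubectl --kubeconfig=/root/.kube/config get pods --namespace=nsdeletetest-demo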
• [SLOW TEST:38.293 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":145,"skipped":2432,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:18:02.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:18:02.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e" in namespace "projected-2350" to be "success or failure" May 11 16:18:02.827: INFO: Pod "downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.855835ms May 11 16:18:04.935: INFO: Pod "downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124696853s May 11 16:18:07.103: INFO: Pod "downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292256536s May 11 16:18:09.106: INFO: Pod "downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e": Phase="Running", Reason="", readiness=true. Elapsed: 6.295197001s May 11 16:18:11.111: INFO: Pod "downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.300321952s STEP: Saw pod success May 11 16:18:11.111: INFO: Pod "downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e" satisfied condition "success or failure" May 11 16:18:11.113: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e container client-container: STEP: delete the pod May 11 16:18:11.243: INFO: Waiting for pod downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e to disappear May 11 16:18:11.303: INFO: Pod downwardapi-volume-2ddc406f-f53f-4581-86a6-86ca2e8bb65e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:18:11.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2350" for this suite. 
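The "podname only" case above projects a single downward API field, the pod's own name, into a file via a projected volume. A minimal sketch of the volume involved (all names and paths are illustrative):

kubectl --kubeconfig=/root/.kube/config apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # resolves to the pod's own name
EOF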
• [SLOW TEST:8.804 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2442,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:18:11.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 16:18:13.095: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 16:18:15.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:18:17.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:18:19.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810693, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:18:22.268: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:18:22.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6112" for this suite. STEP: Destroying namespace "webhook-6112-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.319 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":147,"skipped":2444,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:18:22.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7kn5k in namespace proxy-9912 I0511 16:18:22.978175 6 runners.go:189] Created replication controller with name: proxy-service-7kn5k, namespace: proxy-9912, replica count: 1 I0511 16:18:24.028765 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:18:25.028952 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:18:26.029280 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:18:27.029431 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:18:28.029665 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:18:29.029850 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:18:30.030077 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 16:18:31.030287 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 16:18:32.030482 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 16:18:33.030732 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 
16:18:34.030971 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 16:18:35.031110 6 runners.go:189] proxy-service-7kn5k Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 16:18:35.269: INFO: setup took 12.573119864s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 11 16:18:35.275: INFO: (0) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 5.832392ms) May 11 16:18:35.277: INFO: (0) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 7.987697ms) May 11 16:18:35.278: INFO: (0) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 8.597729ms) May 11 16:18:35.278: INFO: (0) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 8.655628ms) May 11 16:18:35.278: INFO: (0) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 8.856045ms) May 11 16:18:35.279: INFO: (0) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 9.321257ms) May 11 16:18:35.280: INFO: (0) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 10.892031ms) May 11 16:18:35.280: INFO: (0) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 10.920286ms) May 11 16:18:35.280: INFO: (0) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 11.300268ms) May 11 16:18:35.281: INFO: (0) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 11.475365ms) May 11 16:18:35.281: INFO: (0) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 11.948002ms) May 11 16:18:35.284: INFO: (0) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... (200; 5.00408ms) May 11 16:18:35.292: INFO: (1) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: ... (200; 5.337402ms) May 11 16:18:35.292: INFO: (1) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 4.946461ms) May 11 16:18:35.293: INFO: (1) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.615566ms) May 11 16:18:35.293: INFO: (1) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.373396ms) May 11 16:18:35.293: INFO: (1) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 5.749599ms) May 11 16:18:35.293: INFO: (1) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 5.958982ms) May 11 16:18:35.298: INFO: (2) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 4.281326ms) May 11 16:18:35.298: INFO: (2) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.234603ms) May 11 16:18:35.298: INFO: (2) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.317851ms) May 11 16:18:35.298: INFO: (2) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... 
(200; 4.357396ms) May 11 16:18:35.298: INFO: (2) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.015399ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 5.349236ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.323349ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 5.309532ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... (200; 5.389158ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.464933ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 5.752783ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 5.663387ms) May 11 16:18:35.299: INFO: (2) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 6.044236ms) May 11 16:18:35.300: INFO: (2) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 6.195399ms) May 11 16:18:35.300: INFO: (2) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 6.297977ms) May 11 16:18:35.304: INFO: (3) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 3.715866ms) May 11 16:18:35.304: INFO: (3) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 3.811099ms) May 11 16:18:35.304: INFO: (3) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 3.816469ms) May 11 16:18:35.304: INFO: (3) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 4.021997ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.769834ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 5.0641ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 5.078188ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 5.391628ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 5.333464ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 5.441396ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 5.403136ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 5.389129ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.381063ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 5.425207ms) May 11 16:18:35.305: INFO: (3) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... 
(200; 4.544955ms) May 11 16:18:35.310: INFO: (4) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.891271ms) May 11 16:18:35.311: INFO: (4) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 5.126714ms) May 11 16:18:35.311: INFO: (4) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 5.1275ms) May 11 16:18:35.311: INFO: (4) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 5.283456ms) May 11 16:18:35.311: INFO: (4) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.696599ms) May 11 16:18:35.312: INFO: (4) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 6.670972ms) May 11 16:18:35.312: INFO: (4) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 6.666747ms) May 11 16:18:35.312: INFO: (4) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 6.809626ms) May 11 16:18:35.313: INFO: (4) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 7.771818ms) May 11 16:18:35.313: INFO: (4) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 7.758944ms) May 11 16:18:35.313: INFO: (4) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... (200; 6.742584ms) May 11 16:18:35.488: INFO: (5) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 6.817995ms) May 11 16:18:35.488: INFO: (5) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 6.716519ms) May 11 16:18:35.488: INFO: (5) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test (200; 9.121479ms) May 11 16:18:35.491: INFO: (5) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 9.645067ms) May 11 16:18:35.491: INFO: (5) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 9.669431ms) May 11 16:18:35.491: INFO: (5) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 9.650647ms) May 11 16:18:35.491: INFO: (5) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 10.028583ms) May 11 16:18:35.491: INFO: (5) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 9.941615ms) May 11 16:18:35.494: INFO: (5) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 13.093795ms) May 11 16:18:35.494: INFO: (5) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 13.067211ms) May 11 16:18:35.494: INFO: (5) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 13.168044ms) May 11 16:18:35.494: INFO: (5) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 13.214376ms) May 11 16:18:35.499: INFO: (6) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... 
(200; 4.142455ms) May 11 16:18:35.499: INFO: (6) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.611232ms) May 11 16:18:35.499: INFO: (6) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 4.868109ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 4.891943ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 5.009568ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 5.098217ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.155172ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 5.353044ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 5.393673ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.426336ms) May 11 16:18:35.500: INFO: (6) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... (200; 4.822824ms) May 11 16:18:35.507: INFO: (7) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 4.855124ms) May 11 16:18:35.509: INFO: (7) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 6.504169ms) May 11 16:18:35.509: INFO: (7) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 6.902202ms) May 11 16:18:35.509: INFO: (7) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 6.778555ms) May 11 16:18:35.509: INFO: (7) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 6.851014ms) May 11 16:18:35.510: INFO: (7) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test (200; 4.686415ms) May 11 16:18:35.516: INFO: (8) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 5.009019ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.117355ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 5.146668ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... 
(200; 5.405734ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 5.54276ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.674128ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 5.711446ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 5.813631ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 5.795029ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 5.728684ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.861334ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 5.914559ms) May 11 16:18:35.517: INFO: (8) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test (200; 4.051499ms) May 11 16:18:35.522: INFO: (9) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 4.202967ms) May 11 16:18:35.522: INFO: (9) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 4.132848ms) May 11 16:18:35.522: INFO: (9) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.16987ms) May 11 16:18:35.522: INFO: (9) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 4.226794ms) May 11 16:18:35.523: INFO: (9) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 5.003469ms) May 11 16:18:35.523: INFO: (9) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.193548ms) May 11 16:18:35.523: INFO: (9) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 5.150983ms) May 11 16:18:35.523: INFO: (9) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 5.311887ms) May 11 16:18:35.523: INFO: (9) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 5.299522ms) May 11 16:18:35.523: INFO: (9) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 5.310315ms) May 11 16:18:35.526: INFO: (10) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 2.132121ms) May 11 16:18:35.527: INFO: (10) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 3.287396ms) May 11 16:18:35.527: INFO: (10) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... 
(200; 6.846798ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 6.884786ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 6.873357ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 6.930932ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 6.898913ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 6.915042ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 6.97286ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 6.96323ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 6.98529ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 7.108485ms) May 11 16:18:35.531: INFO: (10) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 7.216168ms) May 11 16:18:35.534: INFO: (11) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test (200; 4.383124ms) May 11 16:18:35.535: INFO: (11) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.346924ms) May 11 16:18:35.535: INFO: (11) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 4.406758ms) May 11 16:18:35.535: INFO: (11) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.376754ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.596121ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 4.927434ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 5.292506ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 5.268387ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 5.392391ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 5.335459ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 5.446556ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 5.499457ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 5.480745ms) May 11 16:18:35.536: INFO: (11) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.48856ms) May 11 16:18:35.539: INFO: (12) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... 
(200; 2.773666ms) May 11 16:18:35.539: INFO: (12) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 2.690268ms) May 11 16:18:35.541: INFO: (12) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.253ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 4.821877ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.768598ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 5.111318ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 5.173085ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.150965ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 5.09083ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 5.262171ms) May 11 16:18:35.542: INFO: (12) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test (200; 5.074068ms) May 11 16:18:35.550: INFO: (13) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 5.172844ms) May 11 16:18:35.550: INFO: (13) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 5.116366ms) May 11 16:18:35.550: INFO: (13) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: ... (200; 3.762967ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.135209ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.207086ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 4.365094ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 4.352932ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 4.326982ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.350802ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 4.395825ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 4.412351ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 4.406222ms) May 11 16:18:35.556: INFO: (14) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... (200; 2.593035ms) May 11 16:18:35.559: INFO: (15) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: ... 
(200; 3.43335ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 3.520164ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 3.529847ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 3.607227ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 3.644102ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 3.675111ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 3.676898ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 3.627694ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 3.924405ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 3.97872ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 3.924047ms) May 11 16:18:35.560: INFO: (15) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 3.989485ms) May 11 16:18:35.564: INFO: (16) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 3.225994ms) May 11 16:18:35.564: INFO: (16) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 3.982086ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 4.242815ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 4.195537ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 4.453465ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 4.546119ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 4.485108ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.692574ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 4.655777ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 4.816867ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 4.820443ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.987966ms) May 11 16:18:35.565: INFO: (16) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: ... 
(200; 2.351894ms) May 11 16:18:35.568: INFO: (17) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 2.623736ms) May 11 16:18:35.569: INFO: (17) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test (200; 4.156508ms) May 11 16:18:35.570: INFO: (17) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 4.409683ms) May 11 16:18:35.570: INFO: (17) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 4.441294ms) May 11 16:18:35.570: INFO: (17) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 4.39422ms) May 11 16:18:35.570: INFO: (17) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.763939ms) May 11 16:18:35.570: INFO: (17) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 4.719031ms) May 11 16:18:35.571: INFO: (17) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 4.911236ms) May 11 16:18:35.571: INFO: (17) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 4.897198ms) May 11 16:18:35.571: INFO: (17) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 5.018865ms) May 11 16:18:35.571: INFO: (17) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 5.285307ms) May 11 16:18:35.571: INFO: (17) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 5.467002ms) May 11 16:18:35.573: INFO: (18) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 1.958693ms) May 11 16:18:35.574: INFO: (18) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 3.16701ms) May 11 16:18:35.574: INFO: (18) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 3.202964ms) May 11 16:18:35.574: INFO: (18) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 3.190263ms) May 11 16:18:35.575: INFO: (18) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 3.363902ms) May 11 16:18:35.575: INFO: (18) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 3.462364ms) May 11 16:18:35.575: INFO: (18) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:1080/proxy/: test<... (200; 3.402501ms) May 11 16:18:35.575: INFO: (18) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g/proxy/: test (200; 3.389196ms) May 11 16:18:35.575: INFO: (18) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:462/proxy/: tls qux (200; 3.703173ms) May 11 16:18:35.576: INFO: (18) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 4.379702ms) May 11 16:18:35.576: INFO: (18) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname2/proxy/: bar (200; 4.424269ms) May 11 16:18:35.576: INFO: (18) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test<... 
(200; 4.181288ms) May 11 16:18:35.580: INFO: (19) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:443/proxy/: test (200; 4.220069ms) May 11 16:18:35.580: INFO: (19) /api/v1/namespaces/proxy-9912/pods/https:proxy-service-7kn5k-ft98g:460/proxy/: tls baz (200; 4.176745ms) May 11 16:18:35.580: INFO: (19) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:160/proxy/: foo (200; 4.223332ms) May 11 16:18:35.580: INFO: (19) /api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.169102ms) May 11 16:18:35.580: INFO: (19) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:1080/proxy/: ... (200; 4.205659ms) May 11 16:18:35.580: INFO: (19) /api/v1/namespaces/proxy-9912/pods/http:proxy-service-7kn5k-ft98g:162/proxy/: bar (200; 4.409405ms) May 11 16:18:35.581: INFO: (19) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname2/proxy/: tls qux (200; 4.6806ms) May 11 16:18:35.581: INFO: (19) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname1/proxy/: foo (200; 4.732435ms) May 11 16:18:35.581: INFO: (19) /api/v1/namespaces/proxy-9912/services/http:proxy-service-7kn5k:portname2/proxy/: bar (200; 4.703736ms) May 11 16:18:35.581: INFO: (19) /api/v1/namespaces/proxy-9912/services/proxy-service-7kn5k:portname1/proxy/: foo (200; 4.710128ms) May 11 16:18:35.581: INFO: (19) /api/v1/namespaces/proxy-9912/services/https:proxy-service-7kn5k:tlsportname1/proxy/: tls baz (200; 4.729099ms) STEP: deleting ReplicationController proxy-service-7kn5k in namespace proxy-9912, will wait for the garbage collector to delete the pods May 11 16:18:35.640: INFO: Deleting ReplicationController proxy-service-7kn5k took: 7.134802ms May 11 16:18:36.440: INFO: Terminating ReplicationController proxy-service-7kn5k pods took: 800.260698ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:18:40.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9912" for this suite. 
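Every entry above is one round trip through the apiserver proxy subresource: /api/v1/namespaces/<ns>/pods/[scheme:]<name>:<port>/proxy/<path> forwards the request to the pod, and the services/... variants forward to an endpoint backing the service, which is exactly what "should proxy through a service and a pod" asserts. A minimal client-go sketch of issuing one such request, assuming v0.17-era method signatures to match this suite (the path is copied from the log above; error handling is trimmed to panics):

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a clientset from the same kubeconfig the suite uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // GET pod port 160 through the apiserver, mirroring the
    // /pods/<name>:<port>/proxy/ URLs logged above. DoRaw() is the
    // v0.17-era signature; later client-go releases take a context.
    body, err := cs.CoreV1().RESTClient().Get().
        AbsPath("/api/v1/namespaces/proxy-9912/pods/proxy-service-7kn5k-ft98g:160/proxy/").
        DoRaw()
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", body) // the test expects HTTP 200 with the body "foo"
}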
• [SLOW TEST:17.835 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":148,"skipped":2457,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:18:40.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 11 16:18:40.941: INFO: Waiting up to 5m0s for pod "pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43" in namespace "emptydir-2909" to be "success or failure" May 11 16:18:40.971: INFO: Pod "pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43": Phase="Pending", Reason="", readiness=false. Elapsed: 29.829874ms May 11 16:18:42.975: INFO: Pod "pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033499853s May 11 16:18:45.150: INFO: Pod "pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208497222s May 11 16:18:47.403: INFO: Pod "pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43": Phase="Running", Reason="", readiness=true. Elapsed: 6.461672591s May 11 16:18:49.600: INFO: Pod "pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.658900022s STEP: Saw pod success May 11 16:18:49.600: INFO: Pod "pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43" satisfied condition "success or failure" May 11 16:18:49.602: INFO: Trying to get logs from node jerma-worker2 pod pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43 container test-container: STEP: delete the pod May 11 16:18:50.279: INFO: Waiting for pod pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43 to disappear May 11 16:18:50.421: INFO: Pod pod-48f59e79-d7fb-4ff2-a64e-c5e599954f43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:18:50.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2909" for this suite. 
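For reference, the pod this EmptyDir case creates boils down to a single container writing into an emptyDir volume with the default medium (node disk); the assertion is that the mount comes up root-owned with mode 0777. A sketch under those assumptions (image and names are illustrative, not the framework's exact ones):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // Empty Medium means the default: backed by node disk.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -ld /test-volume"}, // expect drwxrwxrwx root root
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

Setting the volume's Medium to corev1.StorageMediumMemory instead would exercise the tmpfs variants of the same test family.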
• [SLOW TEST:9.996 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:18:50.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 11 16:18:51.202: INFO: Waiting up to 5m0s for pod "client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375" in namespace "containers-7329" to be "success or failure" May 11 16:18:51.224: INFO: Pod "client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375": Phase="Pending", Reason="", readiness=false. Elapsed: 22.634099ms May 11 16:18:53.228: INFO: Pod "client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025820154s May 11 16:18:55.232: INFO: Pod "client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029747081s May 11 16:18:57.247: INFO: Pod "client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045596048s STEP: Saw pod success May 11 16:18:57.247: INFO: Pod "client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375" satisfied condition "success or failure" May 11 16:18:57.250: INFO: Trying to get logs from node jerma-worker2 pod client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375 container test-container: STEP: delete the pod May 11 16:18:57.980: INFO: Waiting for pod client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375 to disappear May 11 16:18:58.027: INFO: Pod client-containers-b32a3698-30ad-4226-8cdb-3de098e3a375 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:18:58.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7329" for this suite. 
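The "override all" case rests on the rule that a container's command replaces the image ENTRYPOINT and its args replace the image CMD; image defaults apply only to fields left empty. A hypothetical spec making that explicit (echo stands in for the agnhost entrypoint-tester the framework actually runs):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "override-all-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"/bin/echo"},             // overrides the image ENTRYPOINT
                Args:    []string{"override", "arguments"}, // overrides the image CMD
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

Leaving both fields empty is the complementary "should use the image defaults if command and args are blank" spec that appears later in this run.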
• [SLOW TEST:7.575 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2527,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:18:58.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:18:58.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6770" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":151,"skipped":2545,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:18:58.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-ace9bc11-7953-4dd9-a3fc-94f3ebf45136 STEP: Creating a pod to test consume secrets May 11 16:18:58.514: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a" in namespace "projected-4468" to be "success or failure" May 11 16:18:58.552: INFO: Pod "pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a": Phase="Pending", Reason="", readiness=false. Elapsed: 37.662749ms May 11 16:19:00.577: INFO: Pod "pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062364683s May 11 16:19:02.581: INFO: Pod "pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a": Phase="Running", Reason="", readiness=true. Elapsed: 4.066933208s May 11 16:19:04.600: INFO: Pod "pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085922272s STEP: Saw pod success May 11 16:19:04.600: INFO: Pod "pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a" satisfied condition "success or failure" May 11 16:19:04.603: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a container projected-secret-volume-test: STEP: delete the pod May 11 16:19:04.642: INFO: Waiting for pod pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a to disappear May 11 16:19:04.841: INFO: Pod pod-projected-secrets-3be4517f-2dbe-4e2c-9bab-dce89df1a55a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:19:04.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4468" for this suite. • [SLOW TEST:6.517 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:19:04.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 11 16:19:04.992: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7058 /api/v1/namespaces/watch-7058/configmaps/e2e-watch-test-resource-version 9c46036b-d551-46c0-a807-c7ba5feb5ed5 15279554 0 2020-05-11 16:19:04 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 16:19:04.992: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7058 /api/v1/namespaces/watch-7058/configmaps/e2e-watch-test-resource-version 
9c46036b-d551-46c0-a807-c7ba5feb5ed5 15279555 0 2020-05-11 16:19:04 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:19:04.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7058" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":153,"skipped":2623,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:19:04.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2344.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2344.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2344.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2344.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2344.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2344.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 16:19:13.341: INFO: DNS probes using dns-2344/dns-test-dd336d52-986c-4725-b205-6d2d51bddc0f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:19:13.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2344" for this suite. 
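The hostname records probed above exist because the pod sets hostname and subdomain while a headless Service named after that subdomain selects it, which yields <hostname>.<subdomain>.<namespace>.svc.cluster.local. A sketch of that pairing, with illustrative labels and image (names taken from the probe commands in the log):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Headless Service whose name matches the pod's subdomain.
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
        Spec: corev1.ServiceSpec{
            ClusterIP: corev1.ClusterIPNone, // headless
            Selector:  map[string]string{"app": "dns-demo"},
            Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
        },
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "dns-querier-2",
            Labels: map[string]string{"app": "dns-demo"},
        },
        Spec: corev1.PodSpec{
            Hostname:  "dns-querier-2",
            Subdomain: "dns-test-service-2",
            Containers: []corev1.Container{{
                Name:  "querier",
                Image: "busybox",
                // kubelet expands $(NAMESPACE) from the env var below, the
                // same substitution mechanism the var-expansion specs cover.
                Command: []string{"sh", "-c",
                    "getent hosts dns-querier-2.dns-test-service-2.$(NAMESPACE).svc.cluster.local && sleep 3600"},
                Env: []corev1.EnvVar{{
                    Name: "NAMESPACE",
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                    },
                }},
            }},
        },
    }
    for _, obj := range []interface{}{svc, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}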
• [SLOW TEST:8.510 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":154,"skipped":2623,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:19:13.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:19:14.215: INFO: Waiting up to 5m0s for pod "busybox-user-65534-76afbe04-1cb6-4c03-9805-917b9729ba28" in namespace "security-context-test-8255" to be "success or failure" May 11 16:19:14.243: INFO: Pod "busybox-user-65534-76afbe04-1cb6-4c03-9805-917b9729ba28": Phase="Pending", Reason="", readiness=false. Elapsed: 27.877444ms May 11 16:19:16.247: INFO: Pod "busybox-user-65534-76afbe04-1cb6-4c03-9805-917b9729ba28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03183226s May 11 16:19:18.252: INFO: Pod "busybox-user-65534-76afbe04-1cb6-4c03-9805-917b9729ba28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03623313s May 11 16:19:20.344: INFO: Pod "busybox-user-65534-76afbe04-1cb6-4c03-9805-917b9729ba28": Phase="Running", Reason="", readiness=true. Elapsed: 6.128244555s May 11 16:19:22.451: INFO: Pod "busybox-user-65534-76afbe04-1cb6-4c03-9805-917b9729ba28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.236021119s May 11 16:19:22.451: INFO: Pod "busybox-user-65534-76afbe04-1cb6-4c03-9805-917b9729ba28" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:19:22.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8255" for this suite. 
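The uid assertion above comes down to the runAsUser security-context field; 65534 is the conventional "nobody" uid. A minimal sketch, assuming a busybox-style image that provides the id binary:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(65534) // the "nobody" uid the test asserts on
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:            "busybox",
                Image:           "busybox",
                Command:         []string{"sh", "-c", "id -u"}, // container log should read 65534
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

The same field can also be set once at pod level via PodSecurityContext, with the container-level value taking precedence where both are present.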
• [SLOW TEST:9.198 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:19:22.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:19:36.613: INFO: Waiting up to 5m0s for pod "client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a" in namespace "pods-1314" to be "success or failure" May 11 16:19:36.631: INFO: Pod "client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.662183ms May 11 16:19:38.635: INFO: Pod "client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02192453s May 11 16:19:40.639: INFO: Pod "client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026441918s May 11 16:19:42.643: INFO: Pod "client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030101489s STEP: Saw pod success May 11 16:19:42.643: INFO: Pod "client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a" satisfied condition "success or failure" May 11 16:19:42.645: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a container env3cont: STEP: delete the pod May 11 16:19:42.723: INFO: Waiting for pod client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a to disappear May 11 16:19:42.726: INFO: Pod client-envvars-2de719aa-050a-4467-b6bc-9dfeb363607a no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:19:42.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1314" for this suite. 
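The variables this spec looks for are the Docker-link-style ones kubelet injects for every Service that already exists when a container starts: the service name upper-cased, dashes turned into underscores, suffixed _SERVICE_HOST and _SERVICE_PORT. A sketch of reading them from inside such a container ("fooservice" is an illustrative service name, not the test's exact one):

package main

import (
    "fmt"
    "os"
)

func main() {
    // For a Service named "fooservice", kubelet sets these in any container
    // created after the Service; containers started earlier never see them.
    fmt.Println("FOOSERVICE_SERVICE_HOST =", os.Getenv("FOOSERVICE_SERVICE_HOST"))
    fmt.Println("FOOSERVICE_SERVICE_PORT =", os.Getenv("FOOSERVICE_SERVICE_PORT"))
}

Because injection happens only at container start, the server pod and Service must exist before the client pod, which is consistent with the delay between pod creations visible in the timestamps above.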
• [SLOW TEST:20.026 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2659,"failed":0} S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:19:42.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:19:46.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1586" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2660,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:19:46.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 11 16:19:47.051: INFO: Waiting up to 5m0s for pod "client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320" in namespace "containers-914" to be "success or failure" May 11 16:19:47.070: INFO: Pod "client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320": Phase="Pending", Reason="", readiness=false. Elapsed: 19.853542ms May 11 16:19:49.159: INFO: Pod "client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108556918s May 11 16:19:51.182: INFO: Pod "client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131628988s May 11 16:19:53.405: INFO: Pod "client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.354537548s May 11 16:19:55.408: INFO: Pod "client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.35695643s STEP: Saw pod success May 11 16:19:55.408: INFO: Pod "client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320" satisfied condition "success or failure" May 11 16:19:55.410: INFO: Trying to get logs from node jerma-worker pod client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320 container test-container: STEP: delete the pod May 11 16:19:55.482: INFO: Waiting for pod client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320 to disappear May 11 16:19:55.490: INFO: Pod client-containers-1a9151f2-4476-4ddc-8561-e72c7ab78320 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:19:55.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-914" for this suite. • [SLOW TEST:8.586 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2666,"failed":0} SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:19:55.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:19:55.536: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 11 16:19:55.550: INFO: Pod name sample-pod: Found 0 pods out of 1 May 11 16:20:00.553: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 16:20:00.553: INFO: Creating deployment "test-rolling-update-deployment" May 11 16:20:00.564: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 11 16:20:00.586: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 11 16:20:02.750: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 11 16:20:02.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:20:04.756: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 16:20:04.763: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3260 /apis/apps/v1/namespaces/deployment-3260/deployments/test-rolling-update-deployment d317e6ae-6149-456b-ace5-c273b3332b34 15279939 1 2020-05-11 16:20:00 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032a5ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 16:20:00 +0000 UTC,LastTransitionTime:2020-05-11 16:20:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-11 16:20:04 +0000 UTC,LastTransitionTime:2020-05-11 16:20:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 16:20:04.766: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-3260 /apis/apps/v1/namespaces/deployment-3260/replicasets/test-rolling-update-deployment-67cf4f6444 e3c77c0c-8295-47a1-ac48-560c07226612 15279928 1 2020-05-11 16:20:00 +0000 UTC map[name:sample-pod
pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d317e6ae-6149-456b-ace5-c273b3332b34 0xc0030cfc27 0xc0030cfc28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030cfdb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 16:20:04.766: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 11 16:20:04.766: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3260 /apis/apps/v1/namespaces/deployment-3260/replicasets/test-rolling-update-controller 4b3fb057-39f6-4a9d-ba66-486373203681 15279937 2 2020-05-11 16:19:55 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d317e6ae-6149-456b-ace5-c273b3332b34 0xc0030cf647 0xc0030cf648}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0030cfaa8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 16:20:04.770: INFO: Pod "test-rolling-update-deployment-67cf4f6444-sqttk" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-sqttk test-rolling-update-deployment-67cf4f6444- deployment-3260 /api/v1/namespaces/deployment-3260/pods/test-rolling-update-deployment-67cf4f6444-sqttk 7d35208a-d372-497e-8d3f-f9550857bc30 15279927 0 2020-05-11 16:20:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 e3c77c0c-8295-47a1-ac48-560c07226612 0xc000054f97 0xc000054f98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tsxsh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tsxsh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tsxsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:20:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:20:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:20:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:20:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.220,StartTime:2020-05-11 16:20:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:20:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://79f46317aa882fb89ab886f621556333cd92769be88ab96f2bd653a7e15bdc73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:20:04.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3260" for this suite. • [SLOW TEST:9.280 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":159,"skipped":2668,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:20:04.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 16:20:05.493: INFO: Waiting up to 5m0s for pod "downward-api-318c5dcd-e275-45ce-baea-dde207f8622c" in namespace "downward-api-9256" to be "success or failure" May 11 16:20:05.538: INFO: Pod "downward-api-318c5dcd-e275-45ce-baea-dde207f8622c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.049982ms May 11 16:20:07.625: INFO: Pod "downward-api-318c5dcd-e275-45ce-baea-dde207f8622c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132146514s May 11 16:20:09.630: INFO: Pod "downward-api-318c5dcd-e275-45ce-baea-dde207f8622c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137277459s May 11 16:20:11.823: INFO: Pod "downward-api-318c5dcd-e275-45ce-baea-dde207f8622c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.330209418s STEP: Saw pod success May 11 16:20:11.823: INFO: Pod "downward-api-318c5dcd-e275-45ce-baea-dde207f8622c" satisfied condition "success or failure" May 11 16:20:11.828: INFO: Trying to get logs from node jerma-worker2 pod downward-api-318c5dcd-e275-45ce-baea-dde207f8622c container dapi-container: STEP: delete the pod May 11 16:20:12.386: INFO: Waiting for pod downward-api-318c5dcd-e275-45ce-baea-dde207f8622c to disappear May 11 16:20:12.530: INFO: Pod downward-api-318c5dcd-e275-45ce-baea-dde207f8622c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:20:12.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9256" for this suite. • [SLOW TEST:7.899 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2678,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:20:12.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 11 16:20:13.468: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 11 16:20:15.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:20:17.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:20:20.856: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:20:21.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:20:23.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3862" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:11.273 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":161,"skipped":2698,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:20:23.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 11 16:20:30.593: INFO: Successfully updated pod "labelsupdate0e2e21de-2a05-45ec-99f7-550d1bf55eb4" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:20:32.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-568" for this suite. 
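The projected downwardAPI test that just finished checks that a volume-projected copy of the pod's labels is rewritten by the kubelet after the labels change (the "Successfully updated pod" line above is the label update). A hand-run sketch of the same behavior, using illustrative pod and label names rather than the generated ones:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Change a label, then re-read the projected file; the kubelet refreshes it
# asynchronously, so the new value can take a short while to appear.
kubectl label pod labels-demo stage=after --overwrite
kubectl exec labels-demo -- cat /etc/podinfo/labels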
• [SLOW TEST:8.879 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2700,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:20:32.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6277 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6277 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6277 May 11 16:20:33.600: INFO: Found 0 stateful pods, waiting for 1 May 11 16:20:43.795: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 11 16:20:43.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 16:20:45.311: INFO: stderr: "I0511 16:20:45.067034 2563 log.go:172] (0xc000a88790) (0xc0008ec000) Create stream\nI0511 16:20:45.067091 2563 log.go:172] (0xc000a88790) (0xc0008ec000) Stream added, broadcasting: 1\nI0511 16:20:45.069020 2563 log.go:172] (0xc000a88790) Reply frame received for 1\nI0511 16:20:45.069055 2563 log.go:172] (0xc000a88790) (0xc0009fe000) Create stream\nI0511 16:20:45.069070 2563 log.go:172] (0xc000a88790) (0xc0009fe000) Stream added, broadcasting: 3\nI0511 16:20:45.071448 2563 log.go:172] (0xc000a88790) Reply frame received for 3\nI0511 16:20:45.071481 2563 log.go:172] (0xc000a88790) (0xc0006a9c20) Create stream\nI0511 16:20:45.071492 2563 log.go:172] (0xc000a88790) (0xc0006a9c20) Stream added, broadcasting: 5\nI0511 16:20:45.072119 2563 log.go:172] (0xc000a88790) Reply frame received for 5\nI0511 16:20:45.119503 2563 log.go:172] (0xc000a88790) Data frame received for 5\nI0511 16:20:45.119525 2563 log.go:172] (0xc0006a9c20) (5) Data frame handling\nI0511 16:20:45.119539 2563 log.go:172] (0xc0006a9c20) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0511 16:20:45.303947 2563 log.go:172] (0xc000a88790) Data frame received for 3\nI0511 16:20:45.303990 2563 log.go:172] (0xc0009fe000) (3) Data frame handling\nI0511 16:20:45.304005 2563 log.go:172] (0xc0009fe000) (3) Data frame sent\nI0511 16:20:45.304020 2563 log.go:172] (0xc000a88790) Data frame received for 3\nI0511 16:20:45.304039 2563 log.go:172] (0xc0009fe000) (3) Data frame handling\nI0511 16:20:45.304127 2563 log.go:172] (0xc000a88790) Data frame received for 5\nI0511 16:20:45.304172 2563 log.go:172] (0xc0006a9c20) (5) Data frame handling\nI0511 16:20:45.306140 2563 log.go:172] (0xc000a88790) Data frame received for 1\nI0511 16:20:45.306171 2563 log.go:172] (0xc0008ec000) (1) Data frame handling\nI0511 16:20:45.306197 2563 log.go:172] (0xc0008ec000) (1) Data frame sent\nI0511 16:20:45.306218 2563 log.go:172] (0xc000a88790) (0xc0008ec000) Stream removed, broadcasting: 1\nI0511 16:20:45.306452 2563 log.go:172] (0xc000a88790) Go away received\nI0511 16:20:45.306822 2563 log.go:172] (0xc000a88790) (0xc0008ec000) Stream removed, broadcasting: 1\nI0511 16:20:45.306861 2563 log.go:172] (0xc000a88790) (0xc0009fe000) Stream removed, broadcasting: 3\nI0511 16:20:45.306882 2563 log.go:172] (0xc000a88790) (0xc0006a9c20) Stream removed, broadcasting: 5\n" May 11 16:20:45.311: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 16:20:45.311: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 16:20:45.326: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 16:20:55.331: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 16:20:55.331: INFO: Waiting for statefulset status.replicas updated to 0 May 11 16:20:55.345: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999662s May 11 16:20:56.348: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994742832s May 11 16:20:57.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991231031s May 11 16:20:58.440: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983390998s May 11 16:20:59.459: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.899379945s May 11 16:21:00.512: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.880745365s May 11 16:21:01.524: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.827298395s May 11 16:21:02.560: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.815450418s May 11 16:21:03.564: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.779562543s May 11 16:21:04.569: INFO: Verifying statefulset ss doesn't scale past 1 for another 775.216899ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6277 May 11 16:21:05.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 16:21:05.767: INFO: stderr: "I0511 16:21:05.708361 2582 log.go:172] (0xc00092a790) (0xc0008c2140) Create stream\nI0511 16:21:05.708406 2582 log.go:172] (0xc00092a790) (0xc0008c2140) Stream added, broadcasting: 1\nI0511 16:21:05.710268 2582 log.go:172] (0xc00092a790) Reply frame received for 1\nI0511 16:21:05.710354 2582 log.go:172] (0xc00092a790) 
(0xc00078a820) Create stream\nI0511 16:21:05.710373 2582 log.go:172] (0xc00092a790) (0xc00078a820) Stream added, broadcasting: 3\nI0511 16:21:05.711265 2582 log.go:172] (0xc00092a790) Reply frame received for 3\nI0511 16:21:05.711298 2582 log.go:172] (0xc00092a790) (0xc000625ae0) Create stream\nI0511 16:21:05.711312 2582 log.go:172] (0xc00092a790) (0xc000625ae0) Stream added, broadcasting: 5\nI0511 16:21:05.711869 2582 log.go:172] (0xc00092a790) Reply frame received for 5\nI0511 16:21:05.760925 2582 log.go:172] (0xc00092a790) Data frame received for 3\nI0511 16:21:05.760944 2582 log.go:172] (0xc00078a820) (3) Data frame handling\nI0511 16:21:05.760960 2582 log.go:172] (0xc00078a820) (3) Data frame sent\nI0511 16:21:05.760969 2582 log.go:172] (0xc00092a790) Data frame received for 3\nI0511 16:21:05.760980 2582 log.go:172] (0xc00078a820) (3) Data frame handling\nI0511 16:21:05.761004 2582 log.go:172] (0xc00092a790) Data frame received for 5\nI0511 16:21:05.761024 2582 log.go:172] (0xc000625ae0) (5) Data frame handling\nI0511 16:21:05.761047 2582 log.go:172] (0xc000625ae0) (5) Data frame sent\nI0511 16:21:05.761056 2582 log.go:172] (0xc00092a790) Data frame received for 5\nI0511 16:21:05.761061 2582 log.go:172] (0xc000625ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 16:21:05.762568 2582 log.go:172] (0xc00092a790) Data frame received for 1\nI0511 16:21:05.762589 2582 log.go:172] (0xc0008c2140) (1) Data frame handling\nI0511 16:21:05.762603 2582 log.go:172] (0xc0008c2140) (1) Data frame sent\nI0511 16:21:05.762654 2582 log.go:172] (0xc00092a790) (0xc0008c2140) Stream removed, broadcasting: 1\nI0511 16:21:05.762672 2582 log.go:172] (0xc00092a790) Go away received\nI0511 16:21:05.762994 2582 log.go:172] (0xc00092a790) (0xc0008c2140) Stream removed, broadcasting: 1\nI0511 16:21:05.763016 2582 log.go:172] (0xc00092a790) (0xc00078a820) Stream removed, broadcasting: 3\nI0511 16:21:05.763031 2582 log.go:172] (0xc00092a790) (0xc000625ae0) Stream removed, broadcasting: 5\n" May 11 16:21:05.767: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 16:21:05.767: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 16:21:05.770: INFO: Found 1 stateful pods, waiting for 3 May 11 16:21:15.872: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 16:21:15.872: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 16:21:15.872: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 11 16:21:15.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 16:21:16.356: INFO: stderr: "I0511 16:21:16.202222 2602 log.go:172] (0xc0009a02c0) (0xc0007b6000) Create stream\nI0511 16:21:16.202303 2602 log.go:172] (0xc0009a02c0) (0xc0007b6000) Stream added, broadcasting: 1\nI0511 16:21:16.204786 2602 log.go:172] (0xc0009a02c0) Reply frame received for 1\nI0511 16:21:16.204833 2602 log.go:172] (0xc0009a02c0) (0xc0007f8000) Create stream\nI0511 16:21:16.204849 2602 log.go:172] (0xc0009a02c0) (0xc0007f8000) Stream added, broadcasting: 3\nI0511 16:21:16.205768 2602 log.go:172] (0xc0009a02c0) 
Reply frame received for 3\nI0511 16:21:16.205813 2602 log.go:172] (0xc0009a02c0) (0xc0007b60a0) Create stream\nI0511 16:21:16.205830 2602 log.go:172] (0xc0009a02c0) (0xc0007b60a0) Stream added, broadcasting: 5\nI0511 16:21:16.206485 2602 log.go:172] (0xc0009a02c0) Reply frame received for 5\nI0511 16:21:16.271734 2602 log.go:172] (0xc0009a02c0) Data frame received for 5\nI0511 16:21:16.271769 2602 log.go:172] (0xc0007b60a0) (5) Data frame handling\nI0511 16:21:16.271784 2602 log.go:172] (0xc0007b60a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 16:21:16.347754 2602 log.go:172] (0xc0009a02c0) Data frame received for 3\nI0511 16:21:16.347783 2602 log.go:172] (0xc0007f8000) (3) Data frame handling\nI0511 16:21:16.347797 2602 log.go:172] (0xc0007f8000) (3) Data frame sent\nI0511 16:21:16.350452 2602 log.go:172] (0xc0009a02c0) Data frame received for 5\nI0511 16:21:16.350481 2602 log.go:172] (0xc0007b60a0) (5) Data frame handling\nI0511 16:21:16.350506 2602 log.go:172] (0xc0009a02c0) Data frame received for 1\nI0511 16:21:16.350519 2602 log.go:172] (0xc0007b6000) (1) Data frame handling\nI0511 16:21:16.350533 2602 log.go:172] (0xc0007b6000) (1) Data frame sent\nI0511 16:21:16.350875 2602 log.go:172] (0xc0009a02c0) Data frame received for 3\nI0511 16:21:16.351069 2602 log.go:172] (0xc0009a02c0) (0xc0007b6000) Stream removed, broadcasting: 1\nI0511 16:21:16.351634 2602 log.go:172] (0xc0007f8000) (3) Data frame handling\nI0511 16:21:16.351666 2602 log.go:172] (0xc0009a02c0) Go away received\nI0511 16:21:16.351948 2602 log.go:172] (0xc0009a02c0) (0xc0007b6000) Stream removed, broadcasting: 1\nI0511 16:21:16.351970 2602 log.go:172] (0xc0009a02c0) (0xc0007f8000) Stream removed, broadcasting: 3\nI0511 16:21:16.351983 2602 log.go:172] (0xc0009a02c0) (0xc0007b60a0) Stream removed, broadcasting: 5\n" May 11 16:21:16.357: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 16:21:16.357: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 16:21:16.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 16:21:17.698: INFO: stderr: "I0511 16:21:17.271533 2620 log.go:172] (0xc000b3bc30) (0xc000ac45a0) Create stream\nI0511 16:21:17.271602 2620 log.go:172] (0xc000b3bc30) (0xc000ac45a0) Stream added, broadcasting: 1\nI0511 16:21:17.273922 2620 log.go:172] (0xc000b3bc30) Reply frame received for 1\nI0511 16:21:17.273991 2620 log.go:172] (0xc000b3bc30) (0xc000b2e820) Create stream\nI0511 16:21:17.274020 2620 log.go:172] (0xc000b3bc30) (0xc000b2e820) Stream added, broadcasting: 3\nI0511 16:21:17.274907 2620 log.go:172] (0xc000b3bc30) Reply frame received for 3\nI0511 16:21:17.274937 2620 log.go:172] (0xc000b3bc30) (0xc000b2e8c0) Create stream\nI0511 16:21:17.274951 2620 log.go:172] (0xc000b3bc30) (0xc000b2e8c0) Stream added, broadcasting: 5\nI0511 16:21:17.275684 2620 log.go:172] (0xc000b3bc30) Reply frame received for 5\nI0511 16:21:17.332738 2620 log.go:172] (0xc000b3bc30) Data frame received for 5\nI0511 16:21:17.332788 2620 log.go:172] (0xc000b2e8c0) (5) Data frame handling\nI0511 16:21:17.332820 2620 log.go:172] (0xc000b2e8c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 16:21:17.684510 2620 log.go:172] (0xc000b3bc30) Data frame received for 3\nI0511 16:21:17.684535 2620 
log.go:172] (0xc000b2e820) (3) Data frame handling\nI0511 16:21:17.684547 2620 log.go:172] (0xc000b2e820) (3) Data frame sent\nI0511 16:21:17.684554 2620 log.go:172] (0xc000b3bc30) Data frame received for 3\nI0511 16:21:17.684561 2620 log.go:172] (0xc000b2e820) (3) Data frame handling\nI0511 16:21:17.685023 2620 log.go:172] (0xc000b3bc30) Data frame received for 5\nI0511 16:21:17.685043 2620 log.go:172] (0xc000b2e8c0) (5) Data frame handling\nI0511 16:21:17.692084 2620 log.go:172] (0xc000b3bc30) Data frame received for 1\nI0511 16:21:17.692121 2620 log.go:172] (0xc000ac45a0) (1) Data frame handling\nI0511 16:21:17.692137 2620 log.go:172] (0xc000ac45a0) (1) Data frame sent\nI0511 16:21:17.692155 2620 log.go:172] (0xc000b3bc30) (0xc000ac45a0) Stream removed, broadcasting: 1\nI0511 16:21:17.692171 2620 log.go:172] (0xc000b3bc30) Go away received\nI0511 16:21:17.692773 2620 log.go:172] (0xc000b3bc30) (0xc000ac45a0) Stream removed, broadcasting: 1\nI0511 16:21:17.692799 2620 log.go:172] (0xc000b3bc30) (0xc000b2e820) Stream removed, broadcasting: 3\nI0511 16:21:17.692813 2620 log.go:172] (0xc000b3bc30) (0xc000b2e8c0) Stream removed, broadcasting: 5\n" May 11 16:21:17.698: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 16:21:17.698: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 16:21:17.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 16:21:18.621: INFO: stderr: "I0511 16:21:18.246674 2641 log.go:172] (0xc0007f0a50) (0xc00054c8c0) Create stream\nI0511 16:21:18.246730 2641 log.go:172] (0xc0007f0a50) (0xc00054c8c0) Stream added, broadcasting: 1\nI0511 16:21:18.248957 2641 log.go:172] (0xc0007f0a50) Reply frame received for 1\nI0511 16:21:18.248999 2641 log.go:172] (0xc0007f0a50) (0xc00001b680) Create stream\nI0511 16:21:18.249009 2641 log.go:172] (0xc0007f0a50) (0xc00001b680) Stream added, broadcasting: 3\nI0511 16:21:18.250066 2641 log.go:172] (0xc0007f0a50) Reply frame received for 3\nI0511 16:21:18.250092 2641 log.go:172] (0xc0007f0a50) (0xc0007bc000) Create stream\nI0511 16:21:18.250099 2641 log.go:172] (0xc0007f0a50) (0xc0007bc000) Stream added, broadcasting: 5\nI0511 16:21:18.250775 2641 log.go:172] (0xc0007f0a50) Reply frame received for 5\nI0511 16:21:18.303539 2641 log.go:172] (0xc0007f0a50) Data frame received for 5\nI0511 16:21:18.303563 2641 log.go:172] (0xc0007bc000) (5) Data frame handling\nI0511 16:21:18.303577 2641 log.go:172] (0xc0007bc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 16:21:18.610540 2641 log.go:172] (0xc0007f0a50) Data frame received for 3\nI0511 16:21:18.610602 2641 log.go:172] (0xc00001b680) (3) Data frame handling\nI0511 16:21:18.610641 2641 log.go:172] (0xc00001b680) (3) Data frame sent\nI0511 16:21:18.610675 2641 log.go:172] (0xc0007f0a50) Data frame received for 3\nI0511 16:21:18.610694 2641 log.go:172] (0xc00001b680) (3) Data frame handling\nI0511 16:21:18.610727 2641 log.go:172] (0xc0007f0a50) Data frame received for 5\nI0511 16:21:18.610763 2641 log.go:172] (0xc0007bc000) (5) Data frame handling\nI0511 16:21:18.613441 2641 log.go:172] (0xc0007f0a50) Data frame received for 1\nI0511 16:21:18.613496 2641 log.go:172] (0xc00054c8c0) (1) Data frame handling\nI0511 16:21:18.613535 2641 log.go:172] (0xc00054c8c0) (1) Data frame sent\nI0511 
16:21:18.613591 2641 log.go:172] (0xc0007f0a50) (0xc00054c8c0) Stream removed, broadcasting: 1\nI0511 16:21:18.613685 2641 log.go:172] (0xc0007f0a50) Go away received\nI0511 16:21:18.614146 2641 log.go:172] (0xc0007f0a50) (0xc00054c8c0) Stream removed, broadcasting: 1\nI0511 16:21:18.614171 2641 log.go:172] (0xc0007f0a50) (0xc00001b680) Stream removed, broadcasting: 3\nI0511 16:21:18.614183 2641 log.go:172] (0xc0007f0a50) (0xc0007bc000) Stream removed, broadcasting: 5\n" May 11 16:21:18.621: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 16:21:18.621: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 16:21:18.621: INFO: Waiting for statefulset status.replicas updated to 0 May 11 16:21:18.800: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 11 16:21:29.516: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 16:21:29.517: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 16:21:29.517: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 16:21:30.362: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999505s May 11 16:21:31.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.946216422s May 11 16:21:32.477: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.873813071s May 11 16:21:33.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.831135642s May 11 16:21:34.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.357039438s May 11 16:21:35.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.35320586s May 11 16:21:36.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.348482027s May 11 16:21:37.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.34402693s May 11 16:21:38.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.339349597s May 11 16:21:39.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 334.783107ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6277 May 11 16:21:41.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 16:21:41.348: INFO: stderr: "I0511 16:21:41.287198 2661 log.go:172] (0xc000aa6e70) (0xc0007e61e0) Create stream\nI0511 16:21:41.287240 2661 log.go:172] (0xc000aa6e70) (0xc0007e61e0) Stream added, broadcasting: 1\nI0511 16:21:41.289480 2661 log.go:172] (0xc000aa6e70) Reply frame received for 1\nI0511 16:21:41.289518 2661 log.go:172] (0xc000aa6e70) (0xc0007e6280) Create stream\nI0511 16:21:41.289529 2661 log.go:172] (0xc000aa6e70) (0xc0007e6280) Stream added, broadcasting: 3\nI0511 16:21:41.290376 2661 log.go:172] (0xc000aa6e70) Reply frame received for 3\nI0511 16:21:41.290412 2661 log.go:172] (0xc000aa6e70) (0xc00084a000) Create stream\nI0511 16:21:41.290432 2661 log.go:172] (0xc000aa6e70) (0xc00084a000) Stream added, broadcasting: 5\nI0511 16:21:41.291197 2661 log.go:172] (0xc000aa6e70) Reply frame received for 5\nI0511 16:21:41.342269 2661 log.go:172] (0xc000aa6e70) Data frame received for 3\nI0511 16:21:41.342296 2661 log.go:172] (0xc0007e6280) (3) Data frame handling\nI0511
16:21:41.342307 2661 log.go:172] (0xc0007e6280) (3) Data frame sent\nI0511 16:21:41.342355 2661 log.go:172] (0xc000aa6e70) Data frame received for 5\nI0511 16:21:41.342376 2661 log.go:172] (0xc00084a000) (5) Data frame handling\nI0511 16:21:41.342386 2661 log.go:172] (0xc00084a000) (5) Data frame sent\nI0511 16:21:41.342398 2661 log.go:172] (0xc000aa6e70) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 16:21:41.342410 2661 log.go:172] (0xc00084a000) (5) Data frame handling\nI0511 16:21:41.342438 2661 log.go:172] (0xc000aa6e70) Data frame received for 3\nI0511 16:21:41.342459 2661 log.go:172] (0xc0007e6280) (3) Data frame handling\nI0511 16:21:41.343415 2661 log.go:172] (0xc000aa6e70) Data frame received for 1\nI0511 16:21:41.343441 2661 log.go:172] (0xc0007e61e0) (1) Data frame handling\nI0511 16:21:41.343454 2661 log.go:172] (0xc0007e61e0) (1) Data frame sent\nI0511 16:21:41.343467 2661 log.go:172] (0xc000aa6e70) (0xc0007e61e0) Stream removed, broadcasting: 1\nI0511 16:21:41.343490 2661 log.go:172] (0xc000aa6e70) Go away received\nI0511 16:21:41.343861 2661 log.go:172] (0xc000aa6e70) (0xc0007e61e0) Stream removed, broadcasting: 1\nI0511 16:21:41.343889 2661 log.go:172] (0xc000aa6e70) (0xc0007e6280) Stream removed, broadcasting: 3\nI0511 16:21:41.343905 2661 log.go:172] (0xc000aa6e70) (0xc00084a000) Stream removed, broadcasting: 5\n" May 11 16:21:41.348: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 16:21:41.348: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 16:21:41.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 16:21:41.641: INFO: stderr: "I0511 16:21:41.573007 2680 log.go:172] (0xc000b46000) (0xc00023b400) Create stream\nI0511 16:21:41.573073 2680 log.go:172] (0xc000b46000) (0xc00023b400) Stream added, broadcasting: 1\nI0511 16:21:41.576294 2680 log.go:172] (0xc000b46000) Reply frame received for 1\nI0511 16:21:41.576328 2680 log.go:172] (0xc000b46000) (0xc000aee000) Create stream\nI0511 16:21:41.576341 2680 log.go:172] (0xc000b46000) (0xc000aee000) Stream added, broadcasting: 3\nI0511 16:21:41.577400 2680 log.go:172] (0xc000b46000) Reply frame received for 3\nI0511 16:21:41.577438 2680 log.go:172] (0xc000b46000) (0xc0007e2000) Create stream\nI0511 16:21:41.577449 2680 log.go:172] (0xc000b46000) (0xc0007e2000) Stream added, broadcasting: 5\nI0511 16:21:41.578367 2680 log.go:172] (0xc000b46000) Reply frame received for 5\nI0511 16:21:41.634604 2680 log.go:172] (0xc000b46000) Data frame received for 3\nI0511 16:21:41.634630 2680 log.go:172] (0xc000aee000) (3) Data frame handling\nI0511 16:21:41.634638 2680 log.go:172] (0xc000aee000) (3) Data frame sent\nI0511 16:21:41.634644 2680 log.go:172] (0xc000b46000) Data frame received for 3\nI0511 16:21:41.634648 2680 log.go:172] (0xc000aee000) (3) Data frame handling\nI0511 16:21:41.634692 2680 log.go:172] (0xc000b46000) Data frame received for 5\nI0511 16:21:41.634725 2680 log.go:172] (0xc0007e2000) (5) Data frame handling\nI0511 16:21:41.634744 2680 log.go:172] (0xc0007e2000) (5) Data frame sent\nI0511 16:21:41.634755 2680 log.go:172] (0xc000b46000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 16:21:41.634764 2680 log.go:172] (0xc0007e2000) (5) Data frame handling\nI0511 
16:21:41.636046 2680 log.go:172] (0xc000b46000) Data frame received for 1\nI0511 16:21:41.636064 2680 log.go:172] (0xc00023b400) (1) Data frame handling\nI0511 16:21:41.636077 2680 log.go:172] (0xc00023b400) (1) Data frame sent\nI0511 16:21:41.636090 2680 log.go:172] (0xc000b46000) (0xc00023b400) Stream removed, broadcasting: 1\nI0511 16:21:41.636104 2680 log.go:172] (0xc000b46000) Go away received\nI0511 16:21:41.636537 2680 log.go:172] (0xc000b46000) (0xc00023b400) Stream removed, broadcasting: 1\nI0511 16:21:41.636563 2680 log.go:172] (0xc000b46000) (0xc000aee000) Stream removed, broadcasting: 3\nI0511 16:21:41.636575 2680 log.go:172] (0xc000b46000) (0xc0007e2000) Stream removed, broadcasting: 5\n" May 11 16:21:41.641: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 16:21:41.641: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 16:21:41.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6277 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 16:21:42.461: INFO: stderr: "I0511 16:21:41.778842 2700 log.go:172] (0xc000a46e70) (0xc0005bbf40) Create stream\nI0511 16:21:41.778901 2700 log.go:172] (0xc000a46e70) (0xc0005bbf40) Stream added, broadcasting: 1\nI0511 16:21:41.782071 2700 log.go:172] (0xc000a46e70) Reply frame received for 1\nI0511 16:21:41.782113 2700 log.go:172] (0xc000a46e70) (0xc000b6c0a0) Create stream\nI0511 16:21:41.782126 2700 log.go:172] (0xc000a46e70) (0xc000b6c0a0) Stream added, broadcasting: 3\nI0511 16:21:41.783107 2700 log.go:172] (0xc000a46e70) Reply frame received for 3\nI0511 16:21:41.783156 2700 log.go:172] (0xc000a46e70) (0xc000a3a000) Create stream\nI0511 16:21:41.783176 2700 log.go:172] (0xc000a46e70) (0xc000a3a000) Stream added, broadcasting: 5\nI0511 16:21:41.784050 2700 log.go:172] (0xc000a46e70) Reply frame received for 5\nI0511 16:21:41.839631 2700 log.go:172] (0xc000a46e70) Data frame received for 5\nI0511 16:21:41.839669 2700 log.go:172] (0xc000a3a000) (5) Data frame handling\nI0511 16:21:41.839701 2700 log.go:172] (0xc000a3a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 16:21:42.453056 2700 log.go:172] (0xc000a46e70) Data frame received for 3\nI0511 16:21:42.453377 2700 log.go:172] (0xc000b6c0a0) (3) Data frame handling\nI0511 16:21:42.453444 2700 log.go:172] (0xc000b6c0a0) (3) Data frame sent\nI0511 16:21:42.453467 2700 log.go:172] (0xc000a46e70) Data frame received for 5\nI0511 16:21:42.453483 2700 log.go:172] (0xc000a3a000) (5) Data frame handling\nI0511 16:21:42.453509 2700 log.go:172] (0xc000a46e70) Data frame received for 3\nI0511 16:21:42.453522 2700 log.go:172] (0xc000b6c0a0) (3) Data frame handling\nI0511 16:21:42.455186 2700 log.go:172] (0xc000a46e70) Data frame received for 1\nI0511 16:21:42.455205 2700 log.go:172] (0xc0005bbf40) (1) Data frame handling\nI0511 16:21:42.455215 2700 log.go:172] (0xc0005bbf40) (1) Data frame sent\nI0511 16:21:42.455234 2700 log.go:172] (0xc000a46e70) (0xc0005bbf40) Stream removed, broadcasting: 1\nI0511 16:21:42.455260 2700 log.go:172] (0xc000a46e70) Go away received\nI0511 16:21:42.455700 2700 log.go:172] (0xc000a46e70) (0xc0005bbf40) Stream removed, broadcasting: 1\nI0511 16:21:42.455729 2700 log.go:172] (0xc000a46e70) (0xc000b6c0a0) Stream removed, broadcasting: 3\nI0511 16:21:42.455741 2700 log.go:172] (0xc000a46e70) (0xc000a3a000) Stream removed, 
broadcasting: 5\n" May 11 16:21:42.461: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 16:21:42.461: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 16:21:42.461: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 16:22:03.243: INFO: Deleting all statefulset in ns statefulset-6277 May 11 16:22:03.246: INFO: Scaling statefulset ss to 0 May 11 16:22:03.251: INFO: Waiting for statefulset status.replicas updated to 0 May 11 16:22:03.253: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:22:03.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6277" for this suite. • [SLOW TEST:90.448 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":163,"skipped":2704,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:22:03.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-eee55aa2-930e-4f1a-88ba-3a9ddc3952ea STEP: Creating a pod to test consume secrets May 11 16:22:03.370: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3" in namespace "projected-9748" to be "success or failure" May 11 16:22:03.380: INFO: Pod "pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.477947ms May 11 16:22:05.384: INFO: Pod "pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013542132s May 11 16:22:07.440: INFO: Pod "pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069602024s May 11 16:22:09.453: INFO: Pod "pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082928862s STEP: Saw pod success May 11 16:22:09.453: INFO: Pod "pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3" satisfied condition "success or failure" May 11 16:22:09.494: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3 container projected-secret-volume-test: STEP: delete the pod May 11 16:22:09.633: INFO: Waiting for pod pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3 to disappear May 11 16:22:09.723: INFO: Pod pod-projected-secrets-31448dae-faca-487d-b6f2-91eca45a7cb3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:22:09.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9748" for this suite. • [SLOW TEST:6.458 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:22:09.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
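The hook test that follows runs a pod whose postStart handler issues an HTTP GET against the handler container created in this step; the kubelet only marks the container Running after the hook succeeds, and a failed hook kills the container per the pod's restartPolicy. Here is a stand-alone sketch of such a spec, assuming the handler is reachable at a placeholder address (the e2e test aims the hook at the handler pod's own IP):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        httpGet:
          host: 10.0.0.10        # placeholder; substitute the handler pod's IP
          port: 8080
          path: /echo?msg=poststart
EOF
# The pod only reaches Running once the postStart GET has returned.
kubectl get pod poststart-http-demo -w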
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 16:22:22.261: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 16:22:22.430: INFO: Pod pod-with-poststart-http-hook still exists May 11 16:22:24.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 16:22:24.433: INFO: Pod pod-with-poststart-http-hook still exists May 11 16:22:26.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 16:22:26.821: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:22:26.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6522" for this suite. • [SLOW TEST:17.425 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2751,"failed":0} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:22:27.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 11 16:22:35.169: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4739 pod-service-account-f47eeb73-5345-4255-be97-17fffbf6e9ad -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 11 16:22:35.380: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4739 pod-service-account-f47eeb73-5345-4255-be97-17fffbf6e9ad -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 11 16:22:35.584: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4739 pod-service-account-f47eeb73-5345-4255-be97-17fffbf6e9ad -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:22:35.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "svcaccounts-4739" for this suite. • [SLOW TEST:8.641 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":166,"skipped":2760,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:22:35.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-rvls STEP: Creating a pod to test atomic-volume-subpath May 11 16:22:36.046: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rvls" in namespace "subpath-1198" to be "success or failure" May 11 16:22:36.208: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Pending", Reason="", readiness=false. Elapsed: 162.129899ms May 11 16:22:38.213: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166798091s May 11 16:22:40.216: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 4.170236594s May 11 16:22:42.221: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 6.175007871s May 11 16:22:44.226: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 8.179810504s May 11 16:22:46.231: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 10.184419953s May 11 16:22:48.419: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 12.372998615s May 11 16:22:50.645: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 14.599212133s May 11 16:22:52.658: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 16.611293149s May 11 16:22:54.662: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 18.615373094s May 11 16:22:56.666: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 20.619653018s May 11 16:22:58.670: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Running", Reason="", readiness=true. Elapsed: 22.624073895s May 11 16:23:00.687: INFO: Pod "pod-subpath-test-configmap-rvls": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.640957932s STEP: Saw pod success May 11 16:23:00.687: INFO: Pod "pod-subpath-test-configmap-rvls" satisfied condition "success or failure" May 11 16:23:00.690: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-rvls container test-container-subpath-configmap-rvls: STEP: delete the pod May 11 16:23:00.720: INFO: Waiting for pod pod-subpath-test-configmap-rvls to disappear May 11 16:23:00.755: INFO: Pod pod-subpath-test-configmap-rvls no longer exists STEP: Deleting pod pod-subpath-test-configmap-rvls May 11 16:23:00.755: INFO: Deleting pod "pod-subpath-test-configmap-rvls" in namespace "subpath-1198" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:23:00.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1198" for this suite. • [SLOW TEST:24.964 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":167,"skipped":2765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:23:00.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:23:01.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564" in namespace "projected-694" to be "success or failure" May 11 16:23:01.216: INFO: Pod "downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564": Phase="Pending", Reason="", readiness=false. Elapsed: 40.962066ms May 11 16:23:03.220: INFO: Pod "downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045266251s May 11 16:23:05.224: INFO: Pod "downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048603907s May 11 16:23:07.227: INFO: Pod "downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051369119s STEP: Saw pod success May 11 16:23:07.227: INFO: Pod "downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564" satisfied condition "success or failure" May 11 16:23:07.228: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564 container client-container: STEP: delete the pod May 11 16:23:07.269: INFO: Waiting for pod downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564 to disappear May 11 16:23:07.286: INFO: Pod downwardapi-volume-49806fa4-93ff-49b9-8cf1-b6456ffe8564 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:23:07.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-694" for this suite. • [SLOW TEST:6.525 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2798,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:23:07.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 16:23:08.299: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 16:23:10.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:23:12.422: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724810988, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:23:15.451: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:23:26.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7854" for this suite. STEP: Destroying namespace "webhook-7854-markers" for this suite.
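The step sequence above corresponds to registering a validating admission webhook through the AdmissionRegistration API and then probing it with compliant and non-compliant objects. A minimal sketch of such a registration in Go, assuming an illustrative webhook name, path, and a caller-supplied CA bundle (the suite's actual values are not shown in the log):

```go
package example

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyWebhookConfig builds a minimal ValidatingWebhookConfiguration of the
// kind the test registers: CREATE requests for pods and configmaps are
// routed to the sample webhook service deployed above. The object name,
// webhook name, and path are illustrative assumptions.
func denyWebhookConfig(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-deny" // hypothetical path served by the webhook pod
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pod-and-configmap-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-pod-and-configmap.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7854", // the test namespace seen above
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}
```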
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.153 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":169,"skipped":2811,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:23:27.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0511 16:24:08.204867 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 16:24:08.204: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:24:08.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4082" for this suite. 
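The orphaning behaviour exercised above is driven entirely by the delete options on the replication controller: with PropagationPolicy set to Orphan, the garbage collector removes the RC but leaves its pods running, which is what the 30-second observation window checks. A minimal sketch, assuming the v1.17-era (context-free) client-go signatures this suite was built against:

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a ReplicationController while leaving its
// pods behind, mirroring the "delete options say so" scenario above.
// The Delete signature matches client-go as of Kubernetes 1.17.
func deleteRCOrphaningPods(cs kubernetes.Interface, namespace, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}
```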
• [SLOW TEST:40.766 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":170,"skipped":2813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:24:08.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:24:08.342: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 11 16:24:13.361: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 16:24:13.361: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 11 16:24:16.279: INFO: Creating deployment "test-rollover-deployment" May 11 16:24:16.313: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 11 16:24:18.439: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 11 16:24:18.659: INFO: Ensure that both replica sets have 1 created replica May 11 16:24:18.665: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 11 16:24:18.669: INFO: Updating deployment test-rollover-deployment May 11 16:24:18.669: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 11 16:24:22.856: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 11 16:24:22.863: INFO: Make sure deployment "test-rollover-deployment" is complete May 11 16:24:22.891: INFO: all replica sets need to contain the pod-template-hash label May 11 16:24:22.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811061, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:24:24.978: INFO: all replica sets need to contain the pod-template-hash label May 11 16:24:24.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:24:26.913: INFO: all replica sets need to contain the pod-template-hash label May 11 16:24:26.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:24:28.900: INFO: all replica sets need to contain the pod-template-hash label May 11 16:24:28.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:24:30.900: INFO: all replica sets need to contain the pod-template-hash label May 11 16:24:30.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:24:32.897: INFO: all replica sets need to contain the pod-template-hash label May 11 16:24:32.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811064, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811056, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:24:34.898: INFO: May 11 16:24:34.898: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 16:24:34.905: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5220 /apis/apps/v1/namespaces/deployment-5220/deployments/test-rollover-deployment 55bfa241-7ef5-43b5-a400-f35a143ab4b2 15281492 2 2020-05-11 16:24:16 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00537e5e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 16:24:16 +0000 UTC,LastTransitionTime:2020-05-11 16:24:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-11 16:24:34 +0000 UTC,LastTransitionTime:2020-05-11 16:24:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 16:24:34.908: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5220 /apis/apps/v1/namespaces/deployment-5220/replicasets/test-rollover-deployment-574d6dfbff 332105dd-0d4f-4f27-a826-9fba7787932d 15281481 2 2020-05-11 16:24:18 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 55bfa241-7ef5-43b5-a400-f35a143ab4b2 0xc00537eaa7 0xc00537eaa8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00537eb18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 16:24:34.908: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 11 16:24:34.908: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5220 /apis/apps/v1/namespaces/deployment-5220/replicasets/test-rollover-controller 42f81a5e-8369-4768-b71b-929c73b544a7 15281490 2 2020-05-11 16:24:08 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 55bfa241-7ef5-43b5-a400-f35a143ab4b2 0xc00537e9c7 0xc00537e9c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine 
[] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00537ea38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 16:24:34.908: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5220 /apis/apps/v1/namespaces/deployment-5220/replicasets/test-rollover-deployment-f6c94f66c 37981168-d08e-4bea-a996-e9c6aaa526a7 15281437 2 2020-05-11 16:24:16 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 55bfa241-7ef5-43b5-a400-f35a143ab4b2 0xc00537eb80 0xc00537eb81}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00537ebf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 16:24:34.910: INFO: Pod "test-rollover-deployment-574d6dfbff-zd4kc" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-zd4kc test-rollover-deployment-574d6dfbff- deployment-5220 /api/v1/namespaces/deployment-5220/pods/test-rollover-deployment-574d6dfbff-zd4kc 277e5b79-dd07-42bf-81fa-69cda9f46e9f 15281449 0 2020-05-11 16:24:20 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 332105dd-0d4f-4f27-a826-9fba7787932d 0xc005264187 0xc005264188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2lg8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2lg8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2lg8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:24:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:24:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:24:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:24:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.53,StartTime:2020-05-11 16:24:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:24:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://175ed602901521ae605f26b11f2b98170defb710888f830859b3daf39dda6fb8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:24:34.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5220" for this suite. • [SLOW TEST:26.705 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":171,"skipped":2838,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:24:34.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0511 16:24:45.904572 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 11 16:24:45.904: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:24:45.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9752" for this suite. • [SLOW TEST:10.996 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":172,"skipped":2843,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:24:45.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:24:45.991: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:24:52.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-312" for this suite. 
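Listing custom resource definitions is a plain List call against the apiextensions API group. A minimal sketch of the operation the test performs, assuming a caller-supplied kubeconfig path and v1.17-era (context-free) client signatures:

```go
package example

import (
	"fmt"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// listCRDs prints the names of all CustomResourceDefinitions in the
// cluster, the same operation the test above exercises.
func listCRDs(kubeconfigPath string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
	return nil
}
```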
• [SLOW TEST:6.130 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":173,"skipped":2852,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:24:52.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD May 11 16:24:52.113: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:25:06.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6093" for this suite.
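The test above publishes a CRD with two versions, then flips one version's served flag off and checks that its schema disappears from the published OpenAPI spec while the other version is untouched. A sketch of the shape of such a CRD, with illustrative group and kind names:

```go
package example

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD sketches a two-version CRD. Flipping Served to false on
// one version is the "mark a version not served" step: that version is
// removed from the API and from the OpenAPI spec, while the other version
// keeps serving.
func multiVersionCRD() *apiextv1.CustomResourceDefinition {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-foos.crd-publish-openapi-test.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com",
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "e2e-test-foos",
				Singular: "e2e-test-foo",
				Kind:     "E2eTestFoo",
				ListKind: "E2eTestFooList",
			},
			Scope: apiextv1.NamespaceScoped,
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v5", Served: true, Storage: true, Schema: schema},
				// Served:false is what the "unserved version gets removed"
				// check observes.
				{Name: "v6", Served: false, Storage: false, Schema: schema},
			},
		},
	}
}
```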
• [SLOW TEST:14.168 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":174,"skipped":2873,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:25:06.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 16:25:06.890: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 16:25:09.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:25:11.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811106, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:25:16.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 11 16:25:20.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8761 to-be-attached-pod -i -c=container1' May 11 16:25:20.430: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:25:20.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8761" for this suite. STEP: Destroying namespace "webhook-8761-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.342 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":175,"skipped":2878,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:25:20.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 16:25:21.367: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 16:25:23.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811121, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811121, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811121, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811121, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:25:26.424: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 11 16:25:26.443: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:25:26.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-516" for this suite. STEP: Destroying namespace "webhook-516-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.440 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":176,"skipped":2886,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:25:26.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 11 16:25:27.148: INFO: Waiting up to 5m0s for pod "var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944" in namespace "var-expansion-1335" to be "success or failure" May 11 16:25:27.249: INFO: Pod "var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944": Phase="Pending", Reason="", readiness=false. Elapsed: 100.918057ms May 11 16:25:29.324: INFO: Pod "var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.175732995s May 11 16:25:31.328: INFO: Pod "var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17994837s May 11 16:25:33.332: INFO: Pod "var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183650874s STEP: Saw pod success May 11 16:25:33.332: INFO: Pod "var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944" satisfied condition "success or failure" May 11 16:25:33.334: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944 container dapi-container: STEP: delete the pod May 11 16:25:33.515: INFO: Waiting for pod var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944 to disappear May 11 16:25:33.525: INFO: Pod var-expansion-6d1f4678-3807-408b-85fd-5ca7fc9ed944 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:25:33.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1335" for this suite. • [SLOW TEST:6.538 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2888,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:25:33.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 16:25:35.936: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 16:25:37.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811135, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811135, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811136, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811134, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:25:39.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811135, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811135, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811136, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724811134, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:25:43.015: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:25:43.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3769" for this suite. STEP: Destroying namespace "webhook-3769-markers" for this suite. 
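The point of the test above is that admission webhooks are never invoked for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves; otherwise a misbehaving webhook could make its own removal impossible. A sketch of the kind of rule the test registers, which the API server deliberately ignores for those configuration objects:

```go
package example

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

// ruleTargetingWebhookConfigs builds a rule that asks to intercept webhook
// configuration objects. The API server exempts these objects from
// admission webhooks, which is exactly what the test verifies: the dummy
// configurations above are created and deleted despite this rule.
func ruleTargetingWebhookConfigs() admissionregistrationv1.RuleWithOperations {
	return admissionregistrationv1.RuleWithOperations{
		Operations: []admissionregistrationv1.OperationType{
			admissionregistrationv1.Create,
			admissionregistrationv1.Update,
			admissionregistrationv1.Delete,
		},
		Rule: admissionregistrationv1.Rule{
			APIGroups:   []string{"admissionregistration.k8s.io"},
			APIVersions: []string{"v1"},
			Resources: []string{
				"validatingwebhookconfigurations",
				"mutatingwebhookconfigurations",
			},
		},
	}
}
```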
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.470 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":178,"skipped":2888,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:25:44.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 16:25:46.094: INFO: Waiting up to 5m0s for pod "downward-api-d55be6a2-c720-476c-8922-b744249cbd1c" in namespace "downward-api-2511" to be "success or failure" May 11 16:25:46.125: INFO: Pod "downward-api-d55be6a2-c720-476c-8922-b744249cbd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.33622ms May 11 16:25:48.151: INFO: Pod "downward-api-d55be6a2-c720-476c-8922-b744249cbd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05711024s May 11 16:25:50.733: INFO: Pod "downward-api-d55be6a2-c720-476c-8922-b744249cbd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.639644216s May 11 16:25:53.014: INFO: Pod "downward-api-d55be6a2-c720-476c-8922-b744249cbd1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.919978804s May 11 16:25:55.018: INFO: Pod "downward-api-d55be6a2-c720-476c-8922-b744249cbd1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.924625412s STEP: Saw pod success May 11 16:25:55.018: INFO: Pod "downward-api-d55be6a2-c720-476c-8922-b744249cbd1c" satisfied condition "success or failure" May 11 16:25:55.022: INFO: Trying to get logs from node jerma-worker2 pod downward-api-d55be6a2-c720-476c-8922-b744249cbd1c container dapi-container: STEP: delete the pod May 11 16:25:55.152: INFO: Waiting for pod downward-api-d55be6a2-c720-476c-8922-b744249cbd1c to disappear May 11 16:25:55.384: INFO: Pod downward-api-d55be6a2-c720-476c-8922-b744249cbd1c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:25:55.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2511" for this suite. 
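The downward API test above exposes a container's own resource requests and limits to that container as environment variables via resourceFieldRef. A sketch of such a pod, with illustrative names, image, and resource quantities:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnvPod sketches a pod whose container sees its own cpu/memory
// requests and limits as env vars, the mechanism the test exercises.
func downwardAPIEnvPod() *corev1.Pod {
	res := func(field string) *corev1.EnvVarSource {
		return &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: field},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_REQUEST", ValueFrom: res("requests.cpu")},
					{Name: "CPU_LIMIT", ValueFrom: res("limits.cpu")},
					{Name: "MEMORY_REQUEST", ValueFrom: res("requests.memory")},
					{Name: "MEMORY_LIMIT", ValueFrom: res("limits.memory")},
				},
			}},
		},
	}
}
```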
• [SLOW TEST:11.398 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2900,"failed":0} [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:25:55.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:25:56.326: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a15ec0d6-8489-41b9-afef-46fe45bad2a4" in namespace "security-context-test-9266" to be "success or failure" May 11 16:25:56.708: INFO: Pod "busybox-privileged-false-a15ec0d6-8489-41b9-afef-46fe45bad2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 381.466128ms May 11 16:25:58.720: INFO: Pod "busybox-privileged-false-a15ec0d6-8489-41b9-afef-46fe45bad2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394377166s May 11 16:26:00.750: INFO: Pod "busybox-privileged-false-a15ec0d6-8489-41b9-afef-46fe45bad2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423675362s May 11 16:26:02.753: INFO: Pod "busybox-privileged-false-a15ec0d6-8489-41b9-afef-46fe45bad2a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.427027447s May 11 16:26:02.753: INFO: Pod "busybox-privileged-false-a15ec0d6-8489-41b9-afef-46fe45bad2a4" satisfied condition "success or failure" May 11 16:26:02.759: INFO: Got logs for pod "busybox-privileged-false-a15ec0d6-8489-41b9-afef-46fe45bad2a4": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:26:02.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9266" for this suite. 
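The "Got logs" line above is the point of this test: with privileged: false the container lacks CAP_NET_ADMIN, so a network-configuration call fails with exactly the "ip: RTNETLINK answers: Operation not permitted" output captured in the log. A hand-run equivalent (image and command are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-unprivileged
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    # Attempt a privileged network operation; it should be rejected.
    command: ["sh", "-c", "ip link add dummy0 type dummy"]
    securityContext:
      privileged: false
EOF
kubectl logs -f busybox-unprivileged   # expect: ip: RTNETLINK answers: Operation not permitted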
• [SLOW TEST:7.365 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2900,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:26:02.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-b6d0f4fc-68a2-421f-9621-ac7e3c850f26 STEP: Creating a pod to test consume secrets May 11 16:26:02.892: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5" in namespace "projected-6477" to be "success or failure" May 11 16:26:02.926: INFO: Pod "pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.406979ms May 11 16:26:07.864: INFO: Pod "pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.971893219s May 11 16:26:09.972: INFO: Pod "pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.079386041s May 11 16:26:11.976: INFO: Pod "pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.083717618s May 11 16:26:13.981: INFO: Pod "pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.088263576s STEP: Saw pod success May 11 16:26:13.981: INFO: Pod "pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5" satisfied condition "success or failure" May 11 16:26:13.984: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5 container secret-volume-test: STEP: delete the pod May 11 16:26:14.122: INFO: Waiting for pod pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5 to disappear May 11 16:26:14.132: INFO: Pod pod-projected-secrets-9ddf588b-3a7e-4625-8f7b-211cdba7e9a5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:26:14.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6477" for this suite. 
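This test projects one secret into the same pod twice, through two separate projected volumes, and checks both copies are readable. A minimal sketch, assuming invented secret/pod names, mount paths, and image:

kubectl create secret generic projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret: {name: projected-demo}
  - name: secret-volume-2
    projected:
      sources:
      - secret: {name: projected-demo}
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # Read the same key from both mount points.
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - {name: secret-volume-1, mountPath: /etc/secret-volume-1, readOnly: true}
    - {name: secret-volume-2, mountPath: /etc/secret-volume-2, readOnly: true}
EOF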
• [SLOW TEST:11.373 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2912,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:26:14.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7c6f748b-899a-4f87-aeff-a43559e7dcd4 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7c6f748b-899a-4f87-aeff-a43559e7dcd4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:26:22.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9807" for this suite. 
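The "waiting to observe update in volume" step works because the kubelet periodically re-syncs projected configMap volumes: updating the ConfigMap eventually changes the mounted file without restarting the pod (propagation can take up to a kubelet sync period plus its cache TTL). A sketch with invented names; the `--dry-run | kubectl replace` idiom matches kubectl of this era:

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-demo
spec:
  volumes:
  - name: config-volume
    projected:
      sources:
      - configMap: {name: projected-demo}
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    # Keep printing the mounted key so the update is visible in the logs.
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 2; done"]
    volumeMounts:
    - {name: config-volume, mountPath: /etc/config}
EOF
# Update the ConfigMap in place and watch the mounted file change.
kubectl create configmap projected-demo --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -
kubectl logs -f pod-projected-configmap-demo   # value-1 ... then value-2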
• [SLOW TEST:8.331 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2919,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:26:22.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-rscr STEP: Creating a pod to test atomic-volume-subpath May 11 16:26:22.540: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rscr" in namespace "subpath-2287" to be "success or failure" May 11 16:26:22.545: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.769856ms May 11 16:26:24.548: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008249132s May 11 16:26:26.552: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 4.012434714s May 11 16:26:28.585: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 6.044584035s May 11 16:26:30.589: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 8.049019713s May 11 16:26:32.593: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 10.052991649s May 11 16:26:34.597: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 12.057392309s May 11 16:26:36.653: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 14.11318564s May 11 16:26:38.816: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 16.276308735s May 11 16:26:40.984: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 18.443958613s May 11 16:26:42.988: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 20.447926395s May 11 16:26:44.991: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 22.451400358s May 11 16:26:47.026: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Running", Reason="", readiness=true. Elapsed: 24.486240826s May 11 16:26:49.144: INFO: Pod "pod-subpath-test-secret-rscr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.604243435s STEP: Saw pod success May 11 16:26:49.144: INFO: Pod "pod-subpath-test-secret-rscr" satisfied condition "success or failure" May 11 16:26:49.721: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-rscr container test-container-subpath-secret-rscr: STEP: delete the pod May 11 16:26:50.519: INFO: Waiting for pod pod-subpath-test-secret-rscr to disappear May 11 16:26:52.358: INFO: Pod pod-subpath-test-secret-rscr no longer exists STEP: Deleting pod pod-subpath-test-secret-rscr May 11 16:26:52.358: INFO: Deleting pod "pod-subpath-test-secret-rscr" in namespace "subpath-2287" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:26:52.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2287" for this suite. • [SLOW TEST:31.172 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":183,"skipped":2928,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:26:53.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:26:54.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1123' May 11 16:27:05.146: INFO: stderr: "" May 11 16:27:05.146: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 11 16:27:05.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1123' May 11 16:27:06.255: INFO: stderr: "" May 11 16:27:06.255: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
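The polling that follows ("Found 0 / 1 ... Found 1 / 1") is the test waiting for the ReplicationController's pod, selected by app=agnhost, to come up before describing it. The same wait-then-describe flow can be approximated by hand (the loop below is a sketch; names and namespace are taken from the run above):

# Poll until the RC-managed pod reports Running, then describe the
# same resources the test inspects.
until kubectl get pods -l app=agnhost --namespace=kubectl-1123 \
    -o jsonpath='{.items[0].status.phase}' 2>/dev/null | grep -q Running; do
  sleep 1
done
kubectl describe rc agnhost-master --namespace=kubectl-1123
kubectl describe service agnhost-master --namespace=kubectl-1123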
May 11 16:27:07.625: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:07.625: INFO: Found 0 / 1 May 11 16:27:08.290: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:08.290: INFO: Found 0 / 1 May 11 16:27:09.422: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:09.422: INFO: Found 0 / 1 May 11 16:27:10.258: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:10.258: INFO: Found 0 / 1 May 11 16:27:11.260: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:11.260: INFO: Found 0 / 1 May 11 16:27:12.323: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:12.323: INFO: Found 0 / 1 May 11 16:27:13.637: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:13.638: INFO: Found 0 / 1 May 11 16:27:14.259: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:14.259: INFO: Found 0 / 1 May 11 16:27:15.259: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:15.259: INFO: Found 0 / 1 May 11 16:27:16.344: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:16.344: INFO: Found 0 / 1 May 11 16:27:17.523: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:17.523: INFO: Found 1 / 1 May 11 16:27:17.523: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 16:27:17.526: INFO: Selector matched 1 pods for map[app:agnhost] May 11 16:27:17.526: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 16:27:17.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-qg2xb --namespace=kubectl-1123' May 11 16:27:17.635: INFO: stderr: "" May 11 16:27:17.635: INFO: stdout: "Name: agnhost-master-qg2xb\nNamespace: kubectl-1123\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Mon, 11 May 2020 16:27:05 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.59\nIPs:\n IP: 10.244.1.59\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://5f04f3dc0b394fa715695aa37a6723b56c7e622296345cf603149a42bb66e5d1\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 16:27:15 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lgcbp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lgcbp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-lgcbp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 12s default-scheduler Successfully assigned kubectl-1123/agnhost-master-qg2xb to jerma-worker\n Normal Pulled 6s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 11 16:27:17.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1123' May 
11 16:27:18.840: INFO: stderr: "" May 11 16:27:18.840: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1123\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replication-controller Created pod: agnhost-master-qg2xb\n" May 11 16:27:18.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1123' May 11 16:27:19.861: INFO: stderr: "" May 11 16:27:19.861: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1123\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.105.0.205\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.59:6379\nSession Affinity: None\nEvents: \n" May 11 16:27:19.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 11 16:27:19.992: INFO: stderr: "" May 11 16:27:19.992: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Mon, 11 May 2020 16:27:10 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 16:23:47 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 16:23:47 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 16:23:47 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 16:23:47 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU 
Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 56d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 56d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 56d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 11 16:27:19.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1123' May 11 16:27:20.084: INFO: stderr: "" May 11 16:27:20.084: INFO: stdout: "Name: kubectl-1123\nLabels: e2e-framework=kubectl\n e2e-run=f9159da4-429b-4f56-aa03-5ee08cb43f79\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:27:20.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1123" for this suite. • [SLOW TEST:26.448 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":184,"skipped":2930,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:27:20.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 11 16:27:33.379: INFO: &Pod{ObjectMeta:{send-events-9c2506e2-263f-4b28-b8bf-a4e3c4cbbae6 events-6562 
/api/v1/namespaces/events-6562/pods/send-events-9c2506e2-263f-4b28-b8bf-a4e3c4cbbae6 ac49e42d-5158-4e32-a7ad-7118496783e4 15282519 0 2020-05-11 16:27:22 +0000 UTC map[name:foo time:34158459] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mqqrr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mqqrr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mqqrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:27:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:27:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 16:27:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.240,StartTime:2020-05-11 16:27:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 16:27:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://61535fa35d5868657a99adf8fc8dd3004ec7c77812ee81b2e8da16ae5d11b251,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 11 16:27:35.530: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 11 16:27:37.553: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:27:37.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6562" for this suite. • [SLOW TEST:18.118 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":185,"skipped":2932,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:27:38.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c5bae063-b361-433d-b2a3-cc0fd838db9f STEP: Creating a pod to test consume configMaps May 11 16:27:38.958: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8" in namespace "configmap-4414" to be "success or failure" May 11 16:27:39.296: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 338.118113ms May 11 16:27:41.539: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.580993068s May 11 16:27:43.710: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.751422157s May 11 16:27:46.668: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.71005286s May 11 16:27:48.671: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.71316085s May 11 16:27:50.847: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8": Phase="Running", Reason="", readiness=true. Elapsed: 11.889168205s May 11 16:27:52.991: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.03271587s STEP: Saw pod success May 11 16:27:52.991: INFO: Pod "pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8" satisfied condition "success or failure" May 11 16:27:52.994: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8 container configmap-volume-test: STEP: delete the pod May 11 16:27:53.137: INFO: Waiting for pod pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8 to disappear May 11 16:27:53.146: INFO: Pod pod-configmaps-c1a9072c-0a4e-416b-9c7e-68bf7dcde8e8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:27:53.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4414" for this suite. • [SLOW TEST:14.944 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:27:53.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 11 16:27:53.314: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:28:10.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1682" for this suite. 
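The "set up a multi version CRD ... rename a version" steps exercise the apiserver's OpenAPI publishing: every served CRD version gets a definition in the aggregated /openapi/v2 document, so renaming a version replaces the published definition while leaving the other version untouched. A sketch of such a multi-version CRD; the group, kind, and version names are invented for illustration:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo, listKind: FooList}
  versions:
  - name: v1alpha1   # renaming this version swaps its published definition
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
EOF
# Served versions appear as definitions in the aggregated OpenAPI document:
kubectl get --raw /openapi/v2 | grep -o 'com.example.v[a-z0-9]*.Foo' | sort -u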
• [SLOW TEST:17.459 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":187,"skipped":2968,"failed":0} [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:28:10.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1697.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1697.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1697.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1697.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 16:28:23.337: INFO: DNS probes using dns-1697/dns-test-8cb9c1cd-bb39-46fc-a463-e1b63d52127c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:28:23.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1697" for this suite. 
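The wheezy/jessie probe loops above reduce to two checks per name: /etc/hosts-backed resolution of the pod's own hostname, and dig queries against the cluster DNS. A single-shot version runnable by hand (pod name and image are assumptions; the e2e images carry extra tooling such as dig):

kubectl run dns-hosts-probe --restart=Never --image=busybox:1.29 -- \
  sh -c 'hostname; hostname -i; cat /etc/hosts'
kubectl logs -f dns-hosts-probe   # the pod hostname and IP should appear in /etc/hosts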
• [SLOW TEST:13.002 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":188,"skipped":2968,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:28:23.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 16:28:24.186: INFO: Waiting up to 5m0s for pod "pod-e48b51fc-fef0-461e-a963-31dc2e6591fc" in namespace "emptydir-2698" to be "success or failure" May 11 16:28:24.255: INFO: Pod "pod-e48b51fc-fef0-461e-a963-31dc2e6591fc": Phase="Pending", Reason="", readiness=false. Elapsed: 69.011696ms May 11 16:28:26.259: INFO: Pod "pod-e48b51fc-fef0-461e-a963-31dc2e6591fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072927523s May 11 16:28:28.435: INFO: Pod "pod-e48b51fc-fef0-461e-a963-31dc2e6591fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24900574s May 11 16:28:31.255: INFO: Pod "pod-e48b51fc-fef0-461e-a963-31dc2e6591fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.069453812s May 11 16:28:33.368: INFO: Pod "pod-e48b51fc-fef0-461e-a963-31dc2e6591fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.182194417s STEP: Saw pod success May 11 16:28:33.368: INFO: Pod "pod-e48b51fc-fef0-461e-a963-31dc2e6591fc" satisfied condition "success or failure" May 11 16:28:33.646: INFO: Trying to get logs from node jerma-worker pod pod-e48b51fc-fef0-461e-a963-31dc2e6591fc container test-container: STEP: delete the pod May 11 16:28:34.435: INFO: Waiting for pod pod-e48b51fc-fef0-461e-a963-31dc2e6591fc to disappear May 11 16:28:34.464: INFO: Pod pod-e48b51fc-fef0-461e-a963-31dc2e6591fc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:28:34.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2698" for this suite. 
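The "(root,0644,tmpfs)" triple in the test name means: run as root, expect mode 0644 on the test file, back the emptyDir with memory (tmpfs). An equivalent by hand, with illustrative names and paths:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c",
      "mount | grep /test-volume; echo data > /test-volume/f; chmod 0644 /test-volume/f; ls -l /test-volume/f"]
    volumeMounts:
    - {name: test-volume, mountPath: /test-volume}
EOF
kubectl logs -f emptydir-tmpfs-demo   # expect a tmpfs mount line and -rw-r--r-- permissions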
• [SLOW TEST:10.861 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3037,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:28:34.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-809e35a4-bd89-4272-9389-d8240ad10f74 in namespace container-probe-3991 May 11 16:28:39.181: INFO: Started pod liveness-809e35a4-bd89-4272-9389-d8240ad10f74 in namespace container-probe-3991 STEP: checking the pod's current state and verifying that restartCount is present May 11 16:28:39.183: INFO: Initial restart count of pod liveness-809e35a4-bd89-4272-9389-d8240ad10f74 is 0 May 11 16:28:59.720: INFO: Restart count of pod container-probe-3991/liveness-809e35a4-bd89-4272-9389-d8240ad10f74 is now 1 (20.536980038s elapsed) May 11 16:29:22.585: INFO: Restart count of pod container-probe-3991/liveness-809e35a4-bd89-4272-9389-d8240ad10f74 is now 2 (43.402073429s elapsed) May 11 16:29:38.919: INFO: Restart count of pod container-probe-3991/liveness-809e35a4-bd89-4272-9389-d8240ad10f74 is now 3 (59.735909478s elapsed) May 11 16:29:59.354: INFO: Restart count of pod container-probe-3991/liveness-809e35a4-bd89-4272-9389-d8240ad10f74 is now 4 (1m20.170999588s elapsed) May 11 16:30:58.848: INFO: Restart count of pod container-probe-3991/liveness-809e35a4-bd89-4272-9389-d8240ad10f74 is now 5 (2m19.665218056s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:30:58.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3991" for this suite. 
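The climbing restart counts logged above come from a liveness probe that keeps failing: once failures exceed the threshold, the kubelet kills and restarts the container, and restartCount only ever increases, which is the monotonicity this test asserts. A minimal reproduction; probe timings, image, and command are assumptions, not the actual e2e pod spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    # Healthy for 20s, then the probed file disappears and the probe fails.
    command: ["sh", "-c", "touch /tmp/health; sleep 20; rm /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-demo -w   # the RESTARTS column only ever increases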
• [SLOW TEST:144.472 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3044,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:30:58.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9768 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9768;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9768 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9768;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9768.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9768.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9768.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9768.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9768.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9768.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9768.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.186.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.186.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.186.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.186.207_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9768 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9768;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9768 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9768;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9768.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9768.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9768.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9768.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9768.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9768.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9768.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9768.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9768.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.186.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.186.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.186.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.186.207_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 16:31:08.312: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.315: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.318: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.320: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.322: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.324: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.327: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.329: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.357: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.359: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.362: INFO: Unable to read jessie_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.367: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.369: INFO: Unable to read jessie_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.371: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.373: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.375: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:08.392: INFO: Lookups using dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9768 wheezy_tcp@dns-test-service.dns-9768 wheezy_udp@dns-test-service.dns-9768.svc wheezy_tcp@dns-test-service.dns-9768.svc wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9768 jessie_tcp@dns-test-service.dns-9768 jessie_udp@dns-test-service.dns-9768.svc jessie_tcp@dns-test-service.dns-9768.svc jessie_udp@_http._tcp.dns-test-service.dns-9768.svc jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc] May 11 16:31:13.498: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.501: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.504: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.508: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.511: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.514: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.517: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.519: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.626: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.629: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.632: INFO: Unable to read jessie_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.635: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.638: INFO: Unable to read jessie_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.640: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.643: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.646: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:13.666: INFO: Lookups using dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9768 wheezy_tcp@dns-test-service.dns-9768 wheezy_udp@dns-test-service.dns-9768.svc wheezy_tcp@dns-test-service.dns-9768.svc wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9768 jessie_tcp@dns-test-service.dns-9768 jessie_udp@dns-test-service.dns-9768.svc jessie_tcp@dns-test-service.dns-9768.svc jessie_udp@_http._tcp.dns-test-service.dns-9768.svc jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc] May 11 16:31:18.398: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.401: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.404: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.408: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768 from pod 
dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.411: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.414: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.416: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.419: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.617: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.935: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.940: INFO: Unable to read jessie_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.942: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.945: INFO: Unable to read jessie_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.950: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.952: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:18.968: INFO: Lookups using dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9768 wheezy_tcp@dns-test-service.dns-9768 wheezy_udp@dns-test-service.dns-9768.svc wheezy_tcp@dns-test-service.dns-9768.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9768 jessie_tcp@dns-test-service.dns-9768 jessie_udp@dns-test-service.dns-9768.svc jessie_tcp@dns-test-service.dns-9768.svc jessie_udp@_http._tcp.dns-test-service.dns-9768.svc jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc] May 11 16:31:23.397: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.400: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.403: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.407: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.409: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.412: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.415: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.418: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.435: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.437: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.440: INFO: Unable to read jessie_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.443: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.446: INFO: Unable to read jessie_udp@dns-test-service.dns-9768.svc from pod 
dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.449: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.452: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.454: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:23.469: INFO: Lookups using dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9768 wheezy_tcp@dns-test-service.dns-9768 wheezy_udp@dns-test-service.dns-9768.svc wheezy_tcp@dns-test-service.dns-9768.svc wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9768 jessie_tcp@dns-test-service.dns-9768 jessie_udp@dns-test-service.dns-9768.svc jessie_tcp@dns-test-service.dns-9768.svc jessie_udp@_http._tcp.dns-test-service.dns-9768.svc jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc] May 11 16:31:28.398: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.401: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.405: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.409: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.412: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.415: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.418: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.421: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod 
dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.446: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.448: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.450: INFO: Unable to read jessie_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.453: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.455: INFO: Unable to read jessie_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.458: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.460: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.462: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:28.473: INFO: Lookups using dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9768 wheezy_tcp@dns-test-service.dns-9768 wheezy_udp@dns-test-service.dns-9768.svc wheezy_tcp@dns-test-service.dns-9768.svc wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9768 jessie_tcp@dns-test-service.dns-9768 jessie_udp@dns-test-service.dns-9768.svc jessie_tcp@dns-test-service.dns-9768.svc jessie_udp@_http._tcp.dns-test-service.dns-9768.svc jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc] May 11 16:31:33.398: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.401: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.404: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the 
server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.407: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.410: INFO: Unable to read wheezy_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.412: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.415: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.417: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.436: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.439: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.441: INFO: Unable to read jessie_udp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768 from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.447: INFO: Unable to read jessie_udp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.450: INFO: Unable to read jessie_tcp@dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.453: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.455: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc from pod dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9: the server could not find the requested resource (get pods dns-test-2f693a24-9d01-4873-8fef-81670548c8c9) May 11 16:31:33.471: INFO: Lookups using dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9768 wheezy_tcp@dns-test-service.dns-9768 wheezy_udp@dns-test-service.dns-9768.svc wheezy_tcp@dns-test-service.dns-9768.svc wheezy_udp@_http._tcp.dns-test-service.dns-9768.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9768.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9768 jessie_tcp@dns-test-service.dns-9768 jessie_udp@dns-test-service.dns-9768.svc jessie_tcp@dns-test-service.dns-9768.svc jessie_udp@_http._tcp.dns-test-service.dns-9768.svc jessie_tcp@_http._tcp.dns-test-service.dns-9768.svc] May 11 16:31:38.478: INFO: DNS probes using dns-9768/dns-test-2f693a24-9d01-4873-8fef-81670548c8c9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:31:39.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9768" for this suite. • [SLOW TEST:40.461 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":191,"skipped":3052,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:31:39.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 11 16:31:45.593: INFO: Pod pod-hostip-c1d168a8-fcd3-49d0-81c8-94fc347f6989 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:31:45.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1141" for this suite. 
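[Note] The wheezy/jessie lookup failures earlier in this excerpt are expected while the probe pods and service records propagate; the run flips to "DNS probes ... succeeded" once every marker file appears. The probes are a shell loop baked into the test pod's command, whose tail ("...echo OK > /results/10.103.186.207_tcp@PTR;sleep 1; done") opens this excerpt. A minimal sketch of that pattern, reconstructed from the visible fragment (the exact dig flags, name list, and marker naming are assumptions):

  # Write one OK marker per lookup target; the framework polls /results until all markers exist.
  for name in dns-test-service dns-test-service.dns-9768.svc; do
    check="$(dig +notcp +noall +answer +search "$name" A)"   # UDP path; the _tcp probes use +tcp
    test -n "$check" && echo OK > "/results/udp@$name"
    sleep 1
  done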
• [SLOW TEST:6.190 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3105,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:31:45.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 11 16:31:45.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 11 16:31:46.006: INFO: stderr: "" May 11 16:31:46.006: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:31:46.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7157" for this suite. 
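[Note] The api-versions assertion above is easy to reproduce from a shell; a minimal sketch (the exact-match grep is an assumption about how strict the check needs to be):

  # Prints nothing and exits non-zero unless the core v1 group/version is served.
  kubectl --kubeconfig=/root/.kube/config api-versions | grep -qx v1 && echo "v1 present"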
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":193,"skipped":3115,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:31:46.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 16:31:46.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2427' May 11 16:31:46.234: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 16:31:46.234: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 11 16:31:46.272: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 11 16:31:46.324: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 11 16:31:46.339: INFO: scanned /root for discovery docs: May 11 16:31:46.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2427' May 11 16:32:06.799: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 16:32:06.799: INFO: stdout: "Created e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062\nScaling up e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" May 11 16:32:06.799: INFO: stdout: "Created e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062\nScaling up e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 11 16:32:06.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2427' May 11 16:32:07.210: INFO: stderr: "" May 11 16:32:07.210: INFO: stdout: "e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062-czb6b " May 11 16:32:07.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062-czb6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2427' May 11 16:32:07.390: INFO: stderr: "" May 11 16:32:07.390: INFO: stdout: "true" May 11 16:32:07.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062-czb6b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2427' May 11 16:32:07.473: INFO: stderr: "" May 11 16:32:07.473: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 11 16:32:07.474: INFO: e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062-czb6b is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 11 16:32:07.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2427' May 11 16:32:07.664: INFO: stderr: "" May 11 16:32:07.664: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:07.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2427" for this suite.
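[Note] As the stderr above says, kubectl rolling-update is deprecated in favor of kubectl rollout; it works by creating a shadow replicationcontroller and scaling the two in opposite directions before deleting and renaming, which is exactly the sequence shown in the stdout transcript. The image verification above uses Go templates; a jsonpath equivalent of the same check would be:

  # Confirm the replacement pod runs the expected image (expect docker.io/library/httpd:2.4.38-alpine).
  kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-rc-ab4378c4258cade24e2e88c75341c062-czb6b \
    -o jsonpath='{.spec.containers[0].image}' --namespace=kubectl-2427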
• [SLOW TEST:22.048 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":194,"skipped":3116,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:08.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 11 16:32:08.870: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:08.877: INFO: Number of nodes with available pods: 0 May 11 16:32:08.877: INFO: Node jerma-worker is running more than one daemon pod May 11 16:32:09.907: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:09.911: INFO: Number of nodes with available pods: 0 May 11 16:32:09.911: INFO: Node jerma-worker is running more than one daemon pod May 11 16:32:10.889: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:10.971: INFO: Number of nodes with available pods: 0 May 11 16:32:10.971: INFO: Node jerma-worker is running more than one daemon pod May 11 16:32:11.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:11.885: INFO: Number of nodes with available pods: 0 May 11 16:32:11.885: INFO: Node jerma-worker is running more than one daemon pod May 11 16:32:13.847: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:13.937: INFO: Number of nodes with available pods: 0 May 11 16:32:13.937: INFO: Node jerma-worker is running more than one daemon pod May 11 16:32:14.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 
16:32:14.973: INFO: Number of nodes with available pods: 2 May 11 16:32:14.973: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 11 16:32:15.072: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:15.076: INFO: Number of nodes with available pods: 1 May 11 16:32:15.076: INFO: Node jerma-worker2 is running more than one daemon pod May 11 16:32:16.080: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:16.083: INFO: Number of nodes with available pods: 1 May 11 16:32:16.083: INFO: Node jerma-worker2 is running more than one daemon pod May 11 16:32:17.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:17.083: INFO: Number of nodes with available pods: 1 May 11 16:32:17.083: INFO: Node jerma-worker2 is running more than one daemon pod May 11 16:32:18.178: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:18.181: INFO: Number of nodes with available pods: 1 May 11 16:32:18.181: INFO: Node jerma-worker2 is running more than one daemon pod May 11 16:32:19.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:19.085: INFO: Number of nodes with available pods: 1 May 11 16:32:19.085: INFO: Node jerma-worker2 is running more than one daemon pod May 11 16:32:20.119: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:20.123: INFO: Number of nodes with available pods: 1 May 11 16:32:20.123: INFO: Node jerma-worker2 is running more than one daemon pod May 11 16:32:21.149: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 16:32:21.193: INFO: Number of nodes with available pods: 2 May 11 16:32:21.193: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
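[Note] The poll loop above counts ready daemon pods per schedulable node; the same convergence is visible on the DaemonSet status itself. A minimal sketch against this run's namespace:

  # desired and available drift apart while the Failed pod is replaced, then converge at 2/2.
  kubectl get daemonset daemon-set --namespace=daemonsets-3254 \
    -o jsonpath='desired={.status.desiredNumberScheduled} available={.status.numberAvailable}'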
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3254, will wait for the garbage collector to delete the pods May 11 16:32:21.256: INFO: Deleting DaemonSet.extensions daemon-set took: 6.005645ms May 11 16:32:21.357: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.427566ms May 11 16:32:29.562: INFO: Number of nodes with available pods: 0 May 11 16:32:29.562: INFO: Number of running nodes: 0, number of available pods: 0 May 11 16:32:29.564: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3254/daemonsets","resourceVersion":"15283739"},"items":null} May 11 16:32:29.566: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3254/pods","resourceVersion":"15283739"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:29.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3254" for this suite. • [SLOW TEST:21.519 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":195,"skipped":3127,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:29.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 11 16:32:34.467: INFO: Successfully updated pod "annotationupdate55fda961-48b4-42ca-98fc-32b2be68dfa6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:38.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8907" for this suite. 
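[Note] The "Successfully updated pod" line above is an annotation change made through the API; the kubelet then rewrites the downwardAPI volume file in place, with no pod restart. A hand-driven equivalent (the annotation key and the /etc/podinfo mount path are assumptions, not taken from this run):

  # Change an annotation, then read the projected file back after the kubelet sync period.
  kubectl annotate pod annotationupdate55fda961-48b4-42ca-98fc-32b2be68dfa6 \
    --namespace=downward-api-8907 --overwrite builder=updated
  kubectl exec annotationupdate55fda961-48b4-42ca-98fc-32b2be68dfa6 \
    --namespace=downward-api-8907 -- cat /etc/podinfo/annotations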
• [SLOW TEST:8.965 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3137,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:38.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 11 16:32:38.596: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:46.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4062" for this suite. 
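[Note] For a RestartAlways pod, every init container must run to completion, in order, before the regular containers start; that ordering is what the spec above asserts. A minimal sketch for inspecting it (the pod name is not printed in this run, so <pod-name> is a placeholder):

  # Each init container should report a terminated state (exit 0) before app containers run.
  kubectl get pod <pod-name> --namespace=init-container-4062 \
    -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'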
• [SLOW TEST:7.756 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":197,"skipped":3138,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:46.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-79d81b91-8e22-4fe1-8ea7-a86f1d4d8bf2 STEP: Creating a pod to test consume secrets May 11 16:32:46.516: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993" in namespace "projected-3861" to be "success or failure" May 11 16:32:46.673: INFO: Pod "pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993": Phase="Pending", Reason="", readiness=false. Elapsed: 156.608336ms May 11 16:32:48.990: INFO: Pod "pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473831951s May 11 16:32:50.992: INFO: Pod "pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475935411s May 11 16:32:53.643: INFO: Pod "pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.126276275s STEP: Saw pod success May 11 16:32:53.643: INFO: Pod "pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993" satisfied condition "success or failure" May 11 16:32:53.683: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993 container projected-secret-volume-test: STEP: delete the pod May 11 16:32:54.569: INFO: Waiting for pod pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993 to disappear May 11 16:32:54.870: INFO: Pod pod-projected-secrets-84799207-1fc1-4108-93ff-b3c244df0993 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:54.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3861" for this suite. 
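[Note] defaultMode on a projected secret volume sets the permission bits of every projected file. A minimal sketch of checking it by hand (<pod-name>, the mount path, and the file name are placeholders; the expected mode depends on what the spec set):

  # stat prints the octal mode, e.g. 400 for a defaultMode of 0400.
  kubectl exec <pod-name> --namespace=projected-3861 -- stat -c '%a' /etc/projected-secret-volume/data-1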
• [SLOW TEST:8.913 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3155,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:55.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:32:57.011: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 15.895879ms)
May 11 16:32:57.014: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.883529ms)
May 11 16:32:57.064: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 49.298908ms)
May 11 16:32:57.350: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 286.023356ms)
May 11 16:32:57.369: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 19.382714ms)
May 11 16:32:57.372: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.897933ms)
May 11 16:32:57.375: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.877573ms)
May 11 16:32:57.378: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.788877ms)
May 11 16:32:57.381: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.639344ms)
May 11 16:32:57.383: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.603549ms)
May 11 16:32:57.386: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.521924ms)
May 11 16:32:57.388: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.651685ms)
May 11 16:32:57.391: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.255254ms)
May 11 16:32:57.393: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.651581ms)
May 11 16:32:57.399: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 5.987127ms)
May 11 16:32:57.402: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.117219ms)
May 11 16:32:57.405: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.425124ms)
May 11 16:32:57.407: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.053925ms)
May 11 16:32:57.409: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.112225ms)
May 11 16:32:57.572: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 162.93165ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:57.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6164" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":199,"skipped":3158,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:57.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:32:57.712: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:58.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5098" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":200,"skipped":3162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:58.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-e455b2c7-e72e-4605-9afc-f757b27d6cec [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:32:59.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6953" for this suite.
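[Note] The empty-key ConfigMap above never reaches etcd: apiserver validation rejects it, which is the behaviour the spec asserts. A minimal sketch that reproduces the rejection (the manifest is illustrative):

  # kubectl exits non-zero; the apiserver refuses empty or otherwise invalid data keys.
  printf '%s\n' 'apiVersion: v1' 'kind: ConfigMap' 'metadata:' '  name: configmap-test-emptykey' 'data:' '  "": "value"' | kubectl apply -f -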
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":201,"skipped":3204,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:32:59.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-c2dg STEP: Creating a pod to test atomic-volume-subpath May 11 16:33:00.669: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-c2dg" in namespace "subpath-4525" to be "success or failure" May 11 16:33:00.723: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 53.32848ms May 11 16:33:02.734: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065204748s May 11 16:33:04.747: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 4.077340389s May 11 16:33:06.864: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 6.194827531s May 11 16:33:08.867: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 8.198036288s May 11 16:33:10.882: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 10.212878815s May 11 16:33:12.918: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 12.249207325s May 11 16:33:14.921: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 14.252118864s May 11 16:33:16.925: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 16.25526864s May 11 16:33:18.930: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 18.261018914s May 11 16:33:20.996: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 20.326254437s May 11 16:33:23.049: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 22.379768012s May 11 16:33:25.123: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Running", Reason="", readiness=true. Elapsed: 24.45324807s May 11 16:33:27.127: INFO: Pod "pod-subpath-test-downwardapi-c2dg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.457717247s STEP: Saw pod success May 11 16:33:27.127: INFO: Pod "pod-subpath-test-downwardapi-c2dg" satisfied condition "success or failure" May 11 16:33:27.130: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-c2dg container test-container-subpath-downwardapi-c2dg: STEP: delete the pod May 11 16:33:27.505: INFO: Waiting for pod pod-subpath-test-downwardapi-c2dg to disappear May 11 16:33:27.557: INFO: Pod pod-subpath-test-downwardapi-c2dg no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-c2dg May 11 16:33:27.557: INFO: Deleting pod "pod-subpath-test-downwardapi-c2dg" in namespace "subpath-4525" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:33:27.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4525" for this suite. • [SLOW TEST:28.051 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":202,"skipped":3225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:33:27.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0511 16:33:29.635625 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 11 16:33:29.635: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:33:29.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2961" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":203,"skipped":3248,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:33:29.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 11 16:33:34.466: INFO: Successfully updated pod "annotationupdate676096cd-b225-4496-9ded-27a59460bcd5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:33:36.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8966" for this suite. 
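The annotation-update check above relies on a projected downwardAPI volume: a file in the volume mirrors metadata.annotations, and the kubelet rewrites it when annotations change on the live pod. A rough sketch of such a pod (name, image, and mount path illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo build="two" --overwrite
# Within the kubelet's sync period the mounted annotations file reflects build="two".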
• [SLOW TEST:6.843 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3254,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:33:36.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-149a1b46-e9e4-4d69-8f03-a461589ecabb STEP: Creating a pod to test consume configMaps May 11 16:33:36.649: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851" in namespace "configmap-6188" to be "success or failure" May 11 16:33:36.674: INFO: Pod "pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851": Phase="Pending", Reason="", readiness=false. Elapsed: 24.637378ms May 11 16:33:38.677: INFO: Pod "pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027984951s May 11 16:33:41.074: INFO: Pod "pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424611983s May 11 16:33:43.183: INFO: Pod "pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.533644934s STEP: Saw pod success May 11 16:33:43.183: INFO: Pod "pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851" satisfied condition "success or failure" May 11 16:33:43.242: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851 container configmap-volume-test: STEP: delete the pod May 11 16:33:43.387: INFO: Waiting for pod pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851 to disappear May 11 16:33:43.492: INFO: Pod pod-configmaps-5b55309d-060a-480d-baa9-b0ca6bbcc851 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:33:43.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6188" for this suite. 
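Two things are combined in the "mappings as non-root" test above: the ConfigMap volume uses items to remap a key onto a different file path, and the pod runs with a non-root UID. A minimal sketch (names and UID illustrative):

kubectl create configmap demo-map --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-map
      items:
      - key: data-1
        path: path/to/data-2     # key remapped to a nested path
EOF
# The container should print value-1 and exit 0, which is the "success or
# failure" condition the suite waits on.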
• [SLOW TEST:7.019 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3262,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:33:43.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 16:33:44.154: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 16:33:44.189: INFO: Waiting for terminating namespaces to be deleted... May 11 16:33:44.191: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 11 16:33:44.199: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 16:33:44.199: INFO: Container kindnet-cni ready: true, restart count 0 May 11 16:33:44.199: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 16:33:44.199: INFO: Container kube-proxy ready: true, restart count 0 May 11 16:33:44.199: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 11 16:33:44.205: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 16:33:44.205: INFO: Container kindnet-cni ready: true, restart count 0 May 11 16:33:44.205: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 11 16:33:44.205: INFO: Container kube-bench ready: false, restart count 0 May 11 16:33:44.205: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 16:33:44.205: INFO: Container kube-proxy ready: true, restart count 0 May 11 16:33:44.205: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 11 16:33:44.205: INFO: Container kube-hunter ready: false, restart count 0 May 11 16:33:44.205: INFO: annotationupdate676096cd-b225-4496-9ded-27a59460bcd5 from projected-8966 started at 2020-05-11 16:33:29 +0000 UTC (1 container status recorded) May 11 16:33:44.205: INFO: Container client-container ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node
jerma-worker2 May 11 16:33:44.456: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 11 16:33:44.456: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 11 16:33:44.456: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 11 16:33:44.456: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 May 11 16:33:44.456: INFO: Pod annotationupdate676096cd-b225-4496-9ded-27a59460bcd5 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 11 16:33:44.456: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 11 16:33:44.566: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-813624d6-455c-4bb5-bcaa-fd8c1658a321.160e06768b1752d3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6070/filler-pod-813624d6-455c-4bb5-bcaa-fd8c1658a321 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-813624d6-455c-4bb5-bcaa-fd8c1658a321.160e06773b719247], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-813624d6-455c-4bb5-bcaa-fd8c1658a321.160e0677fc37698d], Reason = [Created], Message = [Created container filler-pod-813624d6-455c-4bb5-bcaa-fd8c1658a321] STEP: Considering event: Type = [Normal], Name = [filler-pod-813624d6-455c-4bb5-bcaa-fd8c1658a321.160e06781f145993], Reason = [Started], Message = [Started container filler-pod-813624d6-455c-4bb5-bcaa-fd8c1658a321] STEP: Considering event: Type = [Normal], Name = [filler-pod-c93b7f20-0b68-4967-a484-24296cc3cfef.160e0676827fb935], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6070/filler-pod-c93b7f20-0b68-4967-a484-24296cc3cfef to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c93b7f20-0b68-4967-a484-24296cc3cfef.160e067722ac29a1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c93b7f20-0b68-4967-a484-24296cc3cfef.160e0677cf1a5a80], Reason = [Created], Message = [Created container filler-pod-c93b7f20-0b68-4967-a484-24296cc3cfef] STEP: Considering event: Type = [Normal], Name = [filler-pod-c93b7f20-0b68-4967-a484-24296cc3cfef.160e0677f33ca425], Reason = [Started], Message = [Started container filler-pod-c93b7f20-0b68-4967-a484-24296cc3cfef] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e0678713201ae], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:33:54.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6070" for this suite. 
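The sequence above is: sum the CPU already requested per node, start filler pods sized to consume the remainder, then create one more pod whose request cannot fit on any node and assert a FailedScheduling event. The final step can be provoked by hand with a deliberately oversized request (pod name and value illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"   # far beyond any node's allocatable CPU
EOF
kubectl describe pod additional-pod-demo
# Events should include: FailedScheduling ... 0/N nodes are available: ... Insufficient cpu.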
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.834 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":206,"skipped":3278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:33:54.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:34:05.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3500" for this suite. 
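The property exercised above is securityContext.readOnlyRootFilesystem: with it set, writes to the container's root filesystem fail while the container otherwise runs normally. A minimal sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /file"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs busybox-readonly-demo
# Expected: sh: can't create /file: Read-only file system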
• [SLOW TEST:11.348 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:34:05.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 11 16:34:08.237: INFO: Waiting up to 5m0s for pod "var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a" in namespace "var-expansion-6880" to be "success or failure" May 11 16:34:08.578: INFO: Pod "var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a": Phase="Pending", Reason="", readiness=false. Elapsed: 341.284715ms May 11 16:34:10.715: INFO: Pod "var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478005651s May 11 16:34:12.722: INFO: Pod "var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485236106s May 11 16:34:14.871: INFO: Pod "var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.634322819s STEP: Saw pod success May 11 16:34:14.871: INFO: Pod "var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a" satisfied condition "success or failure" May 11 16:34:14.875: INFO: Trying to get logs from node jerma-worker pod var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a container dapi-container: STEP: delete the pod May 11 16:34:15.082: INFO: Waiting for pod var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a to disappear May 11 16:34:15.147: INFO: Pod var-expansion-fce77dfd-a998-4725-8e4b-3309af96793a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:34:15.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6880" for this suite. 
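Env composition here means one variable's value references an earlier one with the $(VAR) syntax; the kubelet expands the reference before starting the container (a literal $(...) would be escaped as $$(...)). A minimal sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: FOOBAR
      value: "prefix-$(FOO)"   # composed from the variable declared above
EOF
kubectl logs var-expansion-demo   # prints: prefix-foo-value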
• [SLOW TEST:9.471 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3337,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:34:15.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-8db46326-4ce9-473e-9e7a-e7369e35e684 in namespace container-probe-6453 May 11 16:34:23.300: INFO: Started pod busybox-8db46326-4ce9-473e-9e7a-e7369e35e684 in namespace container-probe-6453 STEP: checking the pod's current state and verifying that restartCount is present May 11 16:34:23.303: INFO: Initial restart count of pod busybox-8db46326-4ce9-473e-9e7a-e7369e35e684 is 0 May 11 16:35:09.912: INFO: Restart count of pod container-probe-6453/busybox-8db46326-4ce9-473e-9e7a-e7369e35e684 is now 1 (46.609397185s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:35:09.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6453" for this suite. 
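The single restart observed above is the expected outcome of a file-based exec liveness probe: the container creates /tmp/health, removes it partway through its life, the probe starts failing, and the kubelet restarts the container. A minimal sketch (timings illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-demo -w   # RESTARTS increments after /tmp/health disappears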
• [SLOW TEST:54.782 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3338,"failed":0} [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:35:09.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 11 16:35:10.029: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:35:10.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1817" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":210,"skipped":3338,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:35:10.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 16:35:10.228: INFO: Waiting up to 5m0s for pod "pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1" in namespace "emptydir-770" to be "success or failure" May 11 16:35:10.259: INFO: Pod "pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.774095ms May 11 16:35:12.263: INFO: Pod "pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034804403s May 11 16:35:14.268: INFO: Pod "pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040167182s STEP: Saw pod success May 11 16:35:14.268: INFO: Pod "pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1" satisfied condition "success or failure" May 11 16:35:14.271: INFO: Trying to get logs from node jerma-worker pod pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1 container test-container: STEP: delete the pod May 11 16:35:14.454: INFO: Waiting for pod pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1 to disappear May 11 16:35:14.584: INFO: Pod pod-7baa3a98-77b4-490b-a9c4-84667bc64bf1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:35:14.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-770" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3341,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:35:14.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3077 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3077 STEP: creating replication controller externalsvc in namespace services-3077 I0511 16:35:14.950581 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3077, replica count: 2 I0511 16:35:18.000947 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:35:21.001401 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:35:24.001623 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:35:27.001838 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:35:30.002102 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 11 16:35:30.058: INFO: Creating new exec pod May 11 16:35:34.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3077 execpodvvlpf -- /bin/sh -x -c nslookup clusterip-service' May 11 16:35:34.304: INFO: stderr: 
"I0511 16:35:34.204500 3100 log.go:172] (0xc00077a0b0) (0xc0009a20a0) Create stream\nI0511 16:35:34.204578 3100 log.go:172] (0xc00077a0b0) (0xc0009a20a0) Stream added, broadcasting: 1\nI0511 16:35:34.207487 3100 log.go:172] (0xc00077a0b0) Reply frame received for 1\nI0511 16:35:34.207564 3100 log.go:172] (0xc00077a0b0) (0xc0009a2140) Create stream\nI0511 16:35:34.207591 3100 log.go:172] (0xc00077a0b0) (0xc0009a2140) Stream added, broadcasting: 3\nI0511 16:35:34.208596 3100 log.go:172] (0xc00077a0b0) Reply frame received for 3\nI0511 16:35:34.208645 3100 log.go:172] (0xc00077a0b0) (0xc0009a2280) Create stream\nI0511 16:35:34.208669 3100 log.go:172] (0xc00077a0b0) (0xc0009a2280) Stream added, broadcasting: 5\nI0511 16:35:34.209868 3100 log.go:172] (0xc00077a0b0) Reply frame received for 5\nI0511 16:35:34.290779 3100 log.go:172] (0xc00077a0b0) Data frame received for 5\nI0511 16:35:34.290806 3100 log.go:172] (0xc0009a2280) (5) Data frame handling\nI0511 16:35:34.290829 3100 log.go:172] (0xc0009a2280) (5) Data frame sent\n+ nslookup clusterip-service\nI0511 16:35:34.296189 3100 log.go:172] (0xc00077a0b0) Data frame received for 3\nI0511 16:35:34.296216 3100 log.go:172] (0xc0009a2140) (3) Data frame handling\nI0511 16:35:34.296232 3100 log.go:172] (0xc0009a2140) (3) Data frame sent\nI0511 16:35:34.297311 3100 log.go:172] (0xc00077a0b0) Data frame received for 3\nI0511 16:35:34.297330 3100 log.go:172] (0xc0009a2140) (3) Data frame handling\nI0511 16:35:34.297347 3100 log.go:172] (0xc0009a2140) (3) Data frame sent\nI0511 16:35:34.297892 3100 log.go:172] (0xc00077a0b0) Data frame received for 3\nI0511 16:35:34.297914 3100 log.go:172] (0xc0009a2140) (3) Data frame handling\nI0511 16:35:34.298044 3100 log.go:172] (0xc00077a0b0) Data frame received for 5\nI0511 16:35:34.298064 3100 log.go:172] (0xc0009a2280) (5) Data frame handling\nI0511 16:35:34.299801 3100 log.go:172] (0xc00077a0b0) Data frame received for 1\nI0511 16:35:34.299818 3100 log.go:172] (0xc0009a20a0) (1) Data frame handling\nI0511 16:35:34.299833 3100 log.go:172] (0xc0009a20a0) (1) Data frame sent\nI0511 16:35:34.299848 3100 log.go:172] (0xc00077a0b0) (0xc0009a20a0) Stream removed, broadcasting: 1\nI0511 16:35:34.299866 3100 log.go:172] (0xc00077a0b0) Go away received\nI0511 16:35:34.300170 3100 log.go:172] (0xc00077a0b0) (0xc0009a20a0) Stream removed, broadcasting: 1\nI0511 16:35:34.300189 3100 log.go:172] (0xc00077a0b0) (0xc0009a2140) Stream removed, broadcasting: 3\nI0511 16:35:34.300201 3100 log.go:172] (0xc00077a0b0) (0xc0009a2280) Stream removed, broadcasting: 5\n" May 11 16:35:34.304: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3077.svc.cluster.local\tcanonical name = externalsvc.services-3077.svc.cluster.local.\nName:\texternalsvc.services-3077.svc.cluster.local\nAddress: 10.97.254.142\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3077, will wait for the garbage collector to delete the pods May 11 16:35:34.364: INFO: Deleting ReplicationController externalsvc took: 7.127131ms May 11 16:35:34.664: INFO: Terminating ReplicationController externalsvc pods took: 300.271193ms May 11 16:35:49.645: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:35:49.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3077" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:35.081 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":212,"skipped":3361,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:35:49.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 16:35:49.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6449' May 11 16:35:49.903: INFO: stderr: "" May 11 16:35:49.903: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 11 16:35:54.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6449 -o json' May 11 16:35:55.250: INFO: stderr: "" May 11 16:35:55.251: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-11T16:35:49Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6449\",\n \"resourceVersion\": \"15284806\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6449/pods/e2e-test-httpd-pod\",\n \"uid\": \"dc647651-ce33-44d2-b8ab-f4f2e28d39af\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9qgzl\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": 
\"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9qgzl\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9qgzl\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T16:35:49Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T16:35:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T16:35:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T16:35:49Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://7f4e288dfb4e09d905f88c39b02a76c5678097c12c410ac254b6692c9ad857d7\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-11T16:35:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.73\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.73\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-11T16:35:49Z\"\n }\n}\n" STEP: replace the image in the pod May 11 16:35:55.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6449' May 11 16:35:55.845: INFO: stderr: "" May 11 16:35:55.845: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 11 16:35:55.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6449' May 11 16:36:09.530: INFO: stderr: "" May 11 16:36:09.530: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:36:09.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6449" for this suite. 
• [SLOW TEST:20.077 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":213,"skipped":3380,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:36:09.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:36:27.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7459" for this suite. • [SLOW TEST:17.794 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":214,"skipped":3384,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:36:27.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:36:27.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9417" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3385,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:36:27.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 11 16:36:28.189: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4221 /api/v1/namespaces/watch-4221/configmaps/e2e-watch-test-label-changed 053605b2-2f5c-44b0-87ec-1ae869ffbdaf 15284993 0 2020-05-11 16:36:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 16:36:28.190: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4221 /api/v1/namespaces/watch-4221/configmaps/e2e-watch-test-label-changed 053605b2-2f5c-44b0-87ec-1ae869ffbdaf 15284994 0 2020-05-11 16:36:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 16:36:28.190: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4221 /api/v1/namespaces/watch-4221/configmaps/e2e-watch-test-label-changed 053605b2-2f5c-44b0-87ec-1ae869ffbdaf 15284995 0 2020-05-11 16:36:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 11 16:36:38.374: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4221 /api/v1/namespaces/watch-4221/configmaps/e2e-watch-test-label-changed 053605b2-2f5c-44b0-87ec-1ae869ffbdaf 15285046 0 2020-05-11 16:36:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 16:36:38.374: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4221 /api/v1/namespaces/watch-4221/configmaps/e2e-watch-test-label-changed 053605b2-2f5c-44b0-87ec-1ae869ffbdaf 15285047 0 2020-05-11 16:36:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 11 16:36:38.374: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4221 /api/v1/namespaces/watch-4221/configmaps/e2e-watch-test-label-changed 053605b2-2f5c-44b0-87ec-1ae869ffbdaf 15285048 0 2020-05-11 16:36:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:36:38.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4221" for this suite. 
• [SLOW TEST:10.562 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":216,"skipped":3404,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:36:38.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 11 16:36:38.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5156' May 11 16:36:39.002: INFO: stderr: "" May 11 16:36:39.002: INFO: stdout: "pod/pause created\n" May 11 16:36:39.002: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 11 16:36:39.002: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5156" to be "running and ready" May 11 16:36:39.055: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 52.818773ms May 11 16:36:41.059: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056950412s May 11 16:36:43.071: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068943841s May 11 16:36:45.075: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.072940583s May 11 16:36:45.075: INFO: Pod "pause" satisfied condition "running and ready" May 11 16:36:45.075: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 11 16:36:45.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5156' May 11 16:36:45.171: INFO: stderr: "" May 11 16:36:45.171: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 11 16:36:45.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5156' May 11 16:36:45.272: INFO: stderr: "" May 11 16:36:45.272: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod May 11 16:36:45.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5156' May 11 16:36:45.367: INFO: stderr: "" May 11 16:36:45.367: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 11 16:36:45.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5156' May 11 16:36:45.463: INFO: stderr: "" May 11 16:36:45.463: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 11 16:36:45.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5156' May 11 16:36:45.577: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 16:36:45.577: INFO: stdout: "pod \"pause\" force deleted\n" May 11 16:36:45.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5156' May 11 16:36:45.672: INFO: stderr: "No resources found in kubectl-5156 namespace.\n" May 11 16:36:45.672: INFO: stdout: "" May 11 16:36:45.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5156 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 16:36:45.761: INFO: stderr: "" May 11 16:36:45.761: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:36:45.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5156" for this suite. 
• [SLOW TEST:7.284 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":217,"skipped":3409,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:36:45.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ba7c2844-9efb-4fd9-a4ed-bf9bd1358f60 STEP: Creating a pod to test consume secrets May 11 16:36:46.670: INFO: Waiting up to 5m0s for pod "pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb" in namespace "secrets-4606" to be "success or failure" May 11 16:36:46.726: INFO: Pod "pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb": Phase="Pending", Reason="", readiness=false. Elapsed: 55.909697ms May 11 16:36:48.844: INFO: Pod "pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174411583s May 11 16:36:50.966: INFO: Pod "pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296104358s May 11 16:36:52.969: INFO: Pod "pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.298917044s STEP: Saw pod success May 11 16:36:52.969: INFO: Pod "pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb" satisfied condition "success or failure" May 11 16:36:52.971: INFO: Trying to get logs from node jerma-worker pod pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb container secret-volume-test: STEP: delete the pod May 11 16:36:53.074: INFO: Waiting for pod pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb to disappear May 11 16:36:53.110: INFO: Pod pod-secrets-37c4d644-4e32-4868-95dd-f1596cb9feeb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:36:53.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4606" for this suite. 
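defaultMode sets the permission bits on the files materialized from the Secret. A minimal sketch (names illustrative; note the YAML octal literal, and that each mounted path is a symlink whose backing file under ..data carries the mode):

kubectl create secret generic secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -lL /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      defaultMode: 0400   # -r-------- on the backing file
EOF
kubectl logs pod-secrets-demo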
• [SLOW TEST:7.349 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:36:53.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 11 16:37:05.478: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4827 PodName:pod-sharedvolume-17541a2c-571c-42ad-8014-2e9f8f86f01f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:37:05.478: INFO: >>> kubeConfig: /root/.kube/config I0511 16:37:05.499439 6 log.go:172] (0xc0022a5130) (0xc00143ec80) Create stream I0511 16:37:05.499462 6 log.go:172] (0xc0022a5130) (0xc00143ec80) Stream added, broadcasting: 1 I0511 16:37:05.501079 6 log.go:172] (0xc0022a5130) Reply frame received for 1 I0511 16:37:05.501108 6 log.go:172] (0xc0022a5130) (0xc0014290e0) Create stream I0511 16:37:05.501253 6 log.go:172] (0xc0022a5130) (0xc0014290e0) Stream added, broadcasting: 3 I0511 16:37:05.502120 6 log.go:172] (0xc0022a5130) Reply frame received for 3 I0511 16:37:05.502144 6 log.go:172] (0xc0022a5130) (0xc001f22280) Create stream I0511 16:37:05.502150 6 log.go:172] (0xc0022a5130) (0xc001f22280) Stream added, broadcasting: 5 I0511 16:37:05.502994 6 log.go:172] (0xc0022a5130) Reply frame received for 5 I0511 16:37:05.564169 6 log.go:172] (0xc0022a5130) Data frame received for 3 I0511 16:37:05.564221 6 log.go:172] (0xc0014290e0) (3) Data frame handling I0511 16:37:05.564245 6 log.go:172] (0xc0014290e0) (3) Data frame sent I0511 16:37:05.564263 6 log.go:172] (0xc0022a5130) Data frame received for 3 I0511 16:37:05.564295 6 log.go:172] (0xc0014290e0) (3) Data frame handling I0511 16:37:05.564569 6 log.go:172] (0xc0022a5130) Data frame received for 5 I0511 16:37:05.564593 6 log.go:172] (0xc001f22280) (5) Data frame handling I0511 16:37:05.566594 6 log.go:172] (0xc0022a5130) Data frame received for 1 I0511 16:37:05.566637 6 log.go:172] (0xc00143ec80) (1) Data frame handling I0511 16:37:05.566667 6 log.go:172] (0xc00143ec80) (1) Data frame sent I0511 16:37:05.566688 6 log.go:172] (0xc0022a5130) (0xc00143ec80) Stream removed, broadcasting: 1 I0511 16:37:05.566722 6 log.go:172] (0xc0022a5130) Go away received I0511
16:37:05.566886 6 log.go:172] (0xc0022a5130) (0xc00143ec80) Stream removed, broadcasting: 1 I0511 16:37:05.566915 6 log.go:172] (0xc0022a5130) (0xc0014290e0) Stream removed, broadcasting: 3 I0511 16:37:05.566937 6 log.go:172] (0xc0022a5130) (0xc001f22280) Stream removed, broadcasting: 5 May 11 16:37:05.566: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:37:05.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4827" for this suite. • [SLOW TEST:12.457 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":219,"skipped":3450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:37:05.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:37:17.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7762" for this suite. • [SLOW TEST:11.862 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":220,"skipped":3477,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:37:17.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:37:18.258: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e" in namespace "projected-2471" to be "success or failure" May 11 16:37:18.274: INFO: Pod "downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.889938ms May 11 16:37:20.279: INFO: Pod "downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020167496s May 11 16:37:22.281: INFO: Pod "downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022831588s May 11 16:37:24.284: INFO: Pod "downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02612623s STEP: Saw pod success May 11 16:37:24.285: INFO: Pod "downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e" satisfied condition "success or failure" May 11 16:37:24.287: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e container client-container: STEP: delete the pod May 11 16:37:24.440: INFO: Waiting for pod downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e to disappear May 11 16:37:24.451: INFO: Pod downwardapi-volume-4623c8a9-7336-41c3-a748-5d397309be7e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:37:24.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2471" for this suite. 
• [SLOW TEST:7.027 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3482,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:37:24.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 11 16:37:24.816: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:37:35.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6429" for this suite. 
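Init containers on a restartPolicy: Never pod run one at a time, each to completion, before any app container starts, and the pod fails permanently if one of them fails. A minimal sketch with assumed names and images:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]                 # must exit 0 before init2 starts
  - name: init2
    image: busybox
    command: ["true"]                 # must exit 0 before run1 starts
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "echo done"]
EOF
$ kubectl get pod init-demo -w        # Init:0/2 -> Init:1/2 -> PodInitializing -> Completed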
• [SLOW TEST:11.428 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":222,"skipped":3487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:37:35.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:38:02.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8887" for this suite. • [SLOW TEST:26.349 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":223,"skipped":3535,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:38:02.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-2b4fe5b5-f25a-4609-a350-5314c301827f in namespace container-probe-3618 May 11 16:38:07.600: INFO: Started pod liveness-2b4fe5b5-f25a-4609-a350-5314c301827f in namespace container-probe-3618 STEP: checking the pod's current state and verifying that restartCount is present May 11 16:38:07.676: INFO: Initial 
restart count of pod liveness-2b4fe5b5-f25a-4609-a350-5314c301827f is 0 May 11 16:38:28.318: INFO: Restart count of pod container-probe-3618/liveness-2b4fe5b5-f25a-4609-a350-5314c301827f is now 1 (20.641649006s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:38:28.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3618" for this suite. • [SLOW TEST:26.150 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3554,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:38:28.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 16:38:28.639: INFO: Waiting up to 5m0s for pod "pod-f505e330-57bc-4e3a-8a73-41aed1f19982" in namespace "emptydir-2092" to be "success or failure" May 11 16:38:28.723: INFO: Pod "pod-f505e330-57bc-4e3a-8a73-41aed1f19982": Phase="Pending", Reason="", readiness=false. Elapsed: 83.998924ms May 11 16:38:30.829: INFO: Pod "pod-f505e330-57bc-4e3a-8a73-41aed1f19982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190122687s May 11 16:38:32.869: INFO: Pod "pod-f505e330-57bc-4e3a-8a73-41aed1f19982": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229751163s May 11 16:38:34.941: INFO: Pod "pod-f505e330-57bc-4e3a-8a73-41aed1f19982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.302385193s STEP: Saw pod success May 11 16:38:34.941: INFO: Pod "pod-f505e330-57bc-4e3a-8a73-41aed1f19982" satisfied condition "success or failure" May 11 16:38:34.943: INFO: Trying to get logs from node jerma-worker2 pod pod-f505e330-57bc-4e3a-8a73-41aed1f19982 container test-container: STEP: delete the pod May 11 16:38:35.006: INFO: Waiting for pod pod-f505e330-57bc-4e3a-8a73-41aed1f19982 to disappear May 11 16:38:35.084: INFO: Pod pod-f505e330-57bc-4e3a-8a73-41aed1f19982 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:38:35.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2092" for this suite. 
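The (non-root,0777,tmpfs) case mounts a memory-backed emptyDir and checks its permissions while running as a non-root user. A minimal sketch; the UID, image, and mount path are assumptions, and only the Memory medium and the expected 0777 mode follow the test name:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # assumed non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a %F' /test-volume && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs-backed; stat should report 777
EOF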
• [SLOW TEST:6.697 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:38:35.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:38:35.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f" in namespace "projected-7944" to be "success or failure" May 11 16:38:35.160: INFO: Pod "downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.349439ms May 11 16:38:37.172: INFO: Pod "downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015504492s May 11 16:38:39.176: INFO: Pod "downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019314982s STEP: Saw pod success May 11 16:38:39.176: INFO: Pod "downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f" satisfied condition "success or failure" May 11 16:38:39.179: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f container client-container: STEP: delete the pod May 11 16:38:39.247: INFO: Waiting for pod downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f to disappear May 11 16:38:39.263: INFO: Pod downwardapi-volume-c13caca5-5971-450c-a6e4-fb0ed3fedf6f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:38:39.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7944" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3580,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:38:39.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 16:38:39.511: INFO: Waiting up to 5m0s for pod "downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f" in namespace "downward-api-4516" to be "success or failure" May 11 16:38:39.521: INFO: Pod "downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.426355ms May 11 16:38:41.546: INFO: Pod "downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034391711s May 11 16:38:43.653: INFO: Pod "downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141700996s STEP: Saw pod success May 11 16:38:43.653: INFO: Pod "downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f" satisfied condition "success or failure" May 11 16:38:43.815: INFO: Trying to get logs from node jerma-worker pod downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f container dapi-container: STEP: delete the pod May 11 16:38:43.856: INFO: Waiting for pod downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f to disappear May 11 16:38:43.868: INFO: Pod downward-api-7339cf26-36f4-4d7f-80c6-3e4a316f548f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:38:43.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4516" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3586,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:38:43.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-03b6f895-1b19-47e0-8864-3243a85d0907 STEP: Creating a pod to test consume configMaps May 11 16:38:44.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812" in namespace "configmap-8387" to be "success or failure" May 11 16:38:44.216: INFO: Pod "pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812": Phase="Pending", Reason="", readiness=false. Elapsed: 62.841781ms May 11 16:38:46.351: INFO: Pod "pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197462183s May 11 16:38:48.509: INFO: Pod "pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355523504s May 11 16:38:50.514: INFO: Pod "pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.360266076s STEP: Saw pod success May 11 16:38:50.514: INFO: Pod "pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812" satisfied condition "success or failure" May 11 16:38:50.516: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812 container configmap-volume-test: STEP: delete the pod May 11 16:38:51.137: INFO: Waiting for pod pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812 to disappear May 11 16:38:51.247: INFO: Pod pod-configmaps-d4105955-21fe-4fa0-bc32-3d3ffadde812 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:38:51.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8387" for this suite. 
• [SLOW TEST:7.378 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3594,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:38:51.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 16:38:51.525: INFO: Waiting up to 5m0s for pod "pod-0aa3301b-83bb-44c4-85db-a55343d29fd0" in namespace "emptydir-4824" to be "success or failure" May 11 16:38:51.548: INFO: Pod "pod-0aa3301b-83bb-44c4-85db-a55343d29fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.418976ms May 11 16:38:53.721: INFO: Pod "pod-0aa3301b-83bb-44c4-85db-a55343d29fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195795895s May 11 16:38:55.909: INFO: Pod "pod-0aa3301b-83bb-44c4-85db-a55343d29fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384091566s May 11 16:38:57.912: INFO: Pod "pod-0aa3301b-83bb-44c4-85db-a55343d29fd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.386847727s STEP: Saw pod success May 11 16:38:57.912: INFO: Pod "pod-0aa3301b-83bb-44c4-85db-a55343d29fd0" satisfied condition "success or failure" May 11 16:38:57.914: INFO: Trying to get logs from node jerma-worker pod pod-0aa3301b-83bb-44c4-85db-a55343d29fd0 container test-container: STEP: delete the pod May 11 16:38:58.219: INFO: Waiting for pod pod-0aa3301b-83bb-44c4-85db-a55343d29fd0 to disappear May 11 16:38:58.256: INFO: Pod pod-0aa3301b-83bb-44c4-85db-a55343d29fd0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:38:58.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4824" for this suite. 
• [SLOW TEST:7.047 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3597,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:38:58.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:38:58.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8" in namespace "projected-7189" to be "success or failure" May 11 16:38:58.610: INFO: Pod "downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047782ms May 11 16:39:00.841: INFO: Pod "downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234145267s May 11 16:39:02.844: INFO: Pod "downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.237569815s May 11 16:39:04.848: INFO: Pod "downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.241229487s STEP: Saw pod success May 11 16:39:04.848: INFO: Pod "downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8" satisfied condition "success or failure" May 11 16:39:04.851: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8 container client-container: STEP: delete the pod May 11 16:39:04.906: INFO: Waiting for pod downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8 to disappear May 11 16:39:04.989: INFO: Pod downwardapi-volume-0fee8466-b183-4e57-8ba2-5cbe284bffc8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:39:04.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7189" for this suite. 
• [SLOW TEST:6.693 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3610,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:39:04.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-f4ef4ffb-81cf-4786-875f-242376bcebba STEP: Creating a pod to test consume secrets May 11 16:39:05.205: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455" in namespace "projected-9700" to be "success or failure" May 11 16:39:05.210: INFO: Pod "pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455": Phase="Pending", Reason="", readiness=false. Elapsed: 4.94091ms May 11 16:39:07.331: INFO: Pod "pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125165191s May 11 16:39:09.337: INFO: Pod "pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131600102s STEP: Saw pod success May 11 16:39:09.337: INFO: Pod "pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455" satisfied condition "success or failure" May 11 16:39:09.340: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455 container projected-secret-volume-test: STEP: delete the pod May 11 16:39:09.388: INFO: Waiting for pod pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455 to disappear May 11 16:39:09.438: INFO: Pod pod-projected-secrets-d7c9a87e-474e-4263-8ce2-c08cafcdb455 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:39:09.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9700" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3615,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:39:09.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-6794 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6794 to expose endpoints map[] May 11 16:39:10.112: INFO: Get endpoints failed (26.039581ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 11 16:39:11.115: INFO: successfully validated that service endpoint-test2 in namespace services-6794 exposes endpoints map[] (1.029574362s elapsed) STEP: Creating pod pod1 in namespace services-6794 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6794 to expose endpoints map[pod1:[80]] May 11 16:39:15.575: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.453329786s elapsed, will retry) May 11 16:39:16.579: INFO: successfully validated that service endpoint-test2 in namespace services-6794 exposes endpoints map[pod1:[80]] (5.45761565s elapsed) STEP: Creating pod pod2 in namespace services-6794 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6794 to expose endpoints map[pod1:[80] pod2:[80]] May 11 16:39:22.607: INFO: Unexpected endpoints: found map[6091ea04-1ae8-4830-9d12-6c18270c2593:[80]], expected map[pod1:[80] pod2:[80]] (6.025369537s elapsed, will retry) May 11 16:39:23.615: INFO: successfully validated that service endpoint-test2 in namespace services-6794 exposes endpoints map[pod1:[80] pod2:[80]] (7.033338583s elapsed) STEP: Deleting pod pod1 in namespace services-6794 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6794 to expose endpoints map[pod2:[80]] May 11 16:39:24.737: INFO: successfully validated that service endpoint-test2 in namespace services-6794 exposes endpoints map[pod2:[80]] (1.118178433s elapsed) STEP: Deleting pod pod2 in namespace services-6794 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6794 to expose endpoints map[] May 11 16:39:25.786: INFO: successfully validated that service endpoint-test2 in namespace services-6794 exposes endpoints map[] (1.044385567s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:39:25.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6794" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.706 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":232,"skipped":3629,"failed":0} [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:39:26.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-5875 STEP: creating replication controller nodeport-test in namespace services-5875 I0511 16:39:27.182672 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-5875, replica count: 2 I0511 16:39:30.233068 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:39:33.233335 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:39:36.233531 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 16:39:36.233: INFO: Creating new exec pod May 11 16:39:45.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5875 execpodtdhct -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 11 16:39:54.643: INFO: stderr: "I0511 16:39:54.556257 3359 log.go:172] (0xc00010ad10) (0xc0006f5d60) Create stream\nI0511 16:39:54.556294 3359 log.go:172] (0xc00010ad10) (0xc0006f5d60) Stream added, broadcasting: 1\nI0511 16:39:54.563483 3359 log.go:172] (0xc00010ad10) Reply frame received for 1\nI0511 16:39:54.563563 3359 log.go:172] (0xc00010ad10) (0xc0006a0640) Create stream\nI0511 16:39:54.563666 3359 log.go:172] (0xc00010ad10) (0xc0006a0640) Stream added, broadcasting: 3\nI0511 16:39:54.569314 3359 log.go:172] (0xc00010ad10) Reply frame received for 3\nI0511 16:39:54.569347 3359 log.go:172] (0xc00010ad10) (0xc00052b4a0) Create stream\nI0511 16:39:54.569365 3359 log.go:172] (0xc00010ad10) (0xc00052b4a0) Stream added, broadcasting: 5\nI0511 16:39:54.570600 3359 log.go:172] (0xc00010ad10) Reply frame received for 5\nI0511 16:39:54.637351 3359 log.go:172] (0xc00010ad10) Data frame received for 3\nI0511 16:39:54.637384 3359 log.go:172] (0xc0006a0640) (3) Data frame handling\nI0511 16:39:54.637404 3359 log.go:172] 
(0xc00010ad10) Data frame received for 5\nI0511 16:39:54.637418 3359 log.go:172] (0xc00052b4a0) (5) Data frame handling\nI0511 16:39:54.637430 3359 log.go:172] (0xc00052b4a0) (5) Data frame sent\nI0511 16:39:54.637437 3359 log.go:172] (0xc00010ad10) Data frame received for 5\nI0511 16:39:54.637443 3359 log.go:172] (0xc00052b4a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0511 16:39:54.638588 3359 log.go:172] (0xc00010ad10) Data frame received for 1\nI0511 16:39:54.638650 3359 log.go:172] (0xc0006f5d60) (1) Data frame handling\nI0511 16:39:54.638664 3359 log.go:172] (0xc0006f5d60) (1) Data frame sent\nI0511 16:39:54.638678 3359 log.go:172] (0xc00010ad10) (0xc0006f5d60) Stream removed, broadcasting: 1\nI0511 16:39:54.638695 3359 log.go:172] (0xc00010ad10) Go away received\nI0511 16:39:54.638953 3359 log.go:172] (0xc00010ad10) (0xc0006f5d60) Stream removed, broadcasting: 1\nI0511 16:39:54.638966 3359 log.go:172] (0xc00010ad10) (0xc0006a0640) Stream removed, broadcasting: 3\nI0511 16:39:54.638972 3359 log.go:172] (0xc00010ad10) (0xc00052b4a0) Stream removed, broadcasting: 5\n" May 11 16:39:54.643: INFO: stdout: "" May 11 16:39:54.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5875 execpodtdhct -- /bin/sh -x -c nc -zv -t -w 2 10.104.212.206 80' May 11 16:39:54.915: INFO: stderr: "I0511 16:39:54.779699 3394 log.go:172] (0xc0000ec580) (0xc0008b6000) Create stream\nI0511 16:39:54.779750 3394 log.go:172] (0xc0000ec580) (0xc0008b6000) Stream added, broadcasting: 1\nI0511 16:39:54.781995 3394 log.go:172] (0xc0000ec580) Reply frame received for 1\nI0511 16:39:54.782033 3394 log.go:172] (0xc0000ec580) (0xc0008b60a0) Create stream\nI0511 16:39:54.782048 3394 log.go:172] (0xc0000ec580) (0xc0008b60a0) Stream added, broadcasting: 3\nI0511 16:39:54.782922 3394 log.go:172] (0xc0000ec580) Reply frame received for 3\nI0511 16:39:54.782948 3394 log.go:172] (0xc0000ec580) (0xc0008b61e0) Create stream\nI0511 16:39:54.782956 3394 log.go:172] (0xc0000ec580) (0xc0008b61e0) Stream added, broadcasting: 5\nI0511 16:39:54.783797 3394 log.go:172] (0xc0000ec580) Reply frame received for 5\nI0511 16:39:54.908229 3394 log.go:172] (0xc0000ec580) Data frame received for 3\nI0511 16:39:54.908270 3394 log.go:172] (0xc0008b60a0) (3) Data frame handling\nI0511 16:39:54.908301 3394 log.go:172] (0xc0000ec580) Data frame received for 5\nI0511 16:39:54.908325 3394 log.go:172] (0xc0008b61e0) (5) Data frame handling\nI0511 16:39:54.908343 3394 log.go:172] (0xc0008b61e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.212.206 80\nConnection to 10.104.212.206 80 port [tcp/http] succeeded!\nI0511 16:39:54.908357 3394 log.go:172] (0xc0000ec580) Data frame received for 5\nI0511 16:39:54.908403 3394 log.go:172] (0xc0008b61e0) (5) Data frame handling\nI0511 16:39:54.909105 3394 log.go:172] (0xc0000ec580) Data frame received for 1\nI0511 16:39:54.909479 3394 log.go:172] (0xc0008b6000) (1) Data frame handling\nI0511 16:39:54.909503 3394 log.go:172] (0xc0008b6000) (1) Data frame sent\nI0511 16:39:54.909526 3394 log.go:172] (0xc0000ec580) (0xc0008b6000) Stream removed, broadcasting: 1\nI0511 16:39:54.909570 3394 log.go:172] (0xc0000ec580) Go away received\nI0511 16:39:54.910066 3394 log.go:172] (0xc0000ec580) (0xc0008b6000) Stream removed, broadcasting: 1\nI0511 16:39:54.910086 3394 log.go:172] (0xc0000ec580) (0xc0008b60a0) Stream removed, broadcasting: 3\nI0511 16:39:54.910095 3394 log.go:172] (0xc0000ec580) 
(0xc0008b61e0) Stream removed, broadcasting: 5\n" May 11 16:39:54.915: INFO: stdout: "" May 11 16:39:54.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5875 execpodtdhct -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30994' May 11 16:39:55.083: INFO: stderr: "I0511 16:39:55.021608 3414 log.go:172] (0xc000acd970) (0xc000b00320) Create stream\nI0511 16:39:55.021642 3414 log.go:172] (0xc000acd970) (0xc000b00320) Stream added, broadcasting: 1\nI0511 16:39:55.023286 3414 log.go:172] (0xc000acd970) Reply frame received for 1\nI0511 16:39:55.023321 3414 log.go:172] (0xc000acd970) (0xc000ac4000) Create stream\nI0511 16:39:55.023333 3414 log.go:172] (0xc000acd970) (0xc000ac4000) Stream added, broadcasting: 3\nI0511 16:39:55.024259 3414 log.go:172] (0xc000acd970) Reply frame received for 3\nI0511 16:39:55.024318 3414 log.go:172] (0xc000acd970) (0xc0008a8000) Create stream\nI0511 16:39:55.024354 3414 log.go:172] (0xc000acd970) (0xc0008a8000) Stream added, broadcasting: 5\nI0511 16:39:55.031168 3414 log.go:172] (0xc000acd970) Reply frame received for 5\nI0511 16:39:55.077054 3414 log.go:172] (0xc000acd970) Data frame received for 5\nI0511 16:39:55.077067 3414 log.go:172] (0xc0008a8000) (5) Data frame handling\nI0511 16:39:55.077074 3414 log.go:172] (0xc0008a8000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30994\nI0511 16:39:55.077554 3414 log.go:172] (0xc000acd970) Data frame received for 5\nI0511 16:39:55.077582 3414 log.go:172] (0xc0008a8000) (5) Data frame handling\nI0511 16:39:55.077607 3414 log.go:172] (0xc0008a8000) (5) Data frame sent\nConnection to 172.17.0.10 30994 port [tcp/30994] succeeded!\nI0511 16:39:55.077941 3414 log.go:172] (0xc000acd970) Data frame received for 5\nI0511 16:39:55.077958 3414 log.go:172] (0xc0008a8000) (5) Data frame handling\nI0511 16:39:55.078068 3414 log.go:172] (0xc000acd970) Data frame received for 3\nI0511 16:39:55.078075 3414 log.go:172] (0xc000ac4000) (3) Data frame handling\nI0511 16:39:55.079270 3414 log.go:172] (0xc000acd970) Data frame received for 1\nI0511 16:39:55.079296 3414 log.go:172] (0xc000b00320) (1) Data frame handling\nI0511 16:39:55.079358 3414 log.go:172] (0xc000b00320) (1) Data frame sent\nI0511 16:39:55.079377 3414 log.go:172] (0xc000acd970) (0xc000b00320) Stream removed, broadcasting: 1\nI0511 16:39:55.079392 3414 log.go:172] (0xc000acd970) Go away received\nI0511 16:39:55.079723 3414 log.go:172] (0xc000acd970) (0xc000b00320) Stream removed, broadcasting: 1\nI0511 16:39:55.079739 3414 log.go:172] (0xc000acd970) (0xc000ac4000) Stream removed, broadcasting: 3\nI0511 16:39:55.079749 3414 log.go:172] (0xc000acd970) (0xc0008a8000) Stream removed, broadcasting: 5\n" May 11 16:39:55.083: INFO: stdout: "" May 11 16:39:55.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5875 execpodtdhct -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30994' May 11 16:39:55.253: INFO: stderr: "I0511 16:39:55.192664 3433 log.go:172] (0xc000a94000) (0xc000aaa000) Create stream\nI0511 16:39:55.192705 3433 log.go:172] (0xc000a94000) (0xc000aaa000) Stream added, broadcasting: 1\nI0511 16:39:55.194893 3433 log.go:172] (0xc000a94000) Reply frame received for 1\nI0511 16:39:55.194927 3433 log.go:172] (0xc000a94000) (0xc000adc000) Create stream\nI0511 16:39:55.194938 3433 log.go:172] (0xc000a94000) (0xc000adc000) Stream added, broadcasting: 3\nI0511 16:39:55.195836 3433 log.go:172] (0xc000a94000) Reply frame received for 3\nI0511 16:39:55.197363 3433 log.go:172] 
(0xc000a94000) (0xc000a1e000) Create stream\nI0511 16:39:55.197402 3433 log.go:172] (0xc000a94000) (0xc000a1e000) Stream added, broadcasting: 5\nI0511 16:39:55.198371 3433 log.go:172] (0xc000a94000) Reply frame received for 5\nI0511 16:39:55.248258 3433 log.go:172] (0xc000a94000) Data frame received for 5\nI0511 16:39:55.248307 3433 log.go:172] (0xc000a1e000) (5) Data frame handling\nI0511 16:39:55.248326 3433 log.go:172] (0xc000a1e000) (5) Data frame sent\nI0511 16:39:55.248336 3433 log.go:172] (0xc000a94000) Data frame received for 5\nI0511 16:39:55.248345 3433 log.go:172] (0xc000a1e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30994\nConnection to 172.17.0.8 30994 port [tcp/30994] succeeded!\nI0511 16:39:55.248373 3433 log.go:172] (0xc000a94000) Data frame received for 3\nI0511 16:39:55.248389 3433 log.go:172] (0xc000adc000) (3) Data frame handling\nI0511 16:39:55.249661 3433 log.go:172] (0xc000a94000) Data frame received for 1\nI0511 16:39:55.249688 3433 log.go:172] (0xc000aaa000) (1) Data frame handling\nI0511 16:39:55.249706 3433 log.go:172] (0xc000aaa000) (1) Data frame sent\nI0511 16:39:55.249732 3433 log.go:172] (0xc000a94000) (0xc000aaa000) Stream removed, broadcasting: 1\nI0511 16:39:55.249757 3433 log.go:172] (0xc000a94000) Go away received\nI0511 16:39:55.250187 3433 log.go:172] (0xc000a94000) (0xc000aaa000) Stream removed, broadcasting: 1\nI0511 16:39:55.250225 3433 log.go:172] (0xc000a94000) (0xc000adc000) Stream removed, broadcasting: 3\nI0511 16:39:55.250237 3433 log.go:172] (0xc000a94000) (0xc000a1e000) Stream removed, broadcasting: 5\n" May 11 16:39:55.253: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:39:55.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5875" for this suite. 
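The NodePort check is plain TCP reachability: the allocated port must accept connections on every node IP as well as via the cluster IP and the service name. A minimal sketch that mirrors the probes above (deployment name and image are assumptions; the nc flags and the node IP 172.17.0.10 are the ones from this run):

$ kubectl create deployment nodeport-demo --image=nginx
$ kubectl expose deployment nodeport-demo --type=NodePort --port=80
$ PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
$ kubectl run netcheck --rm -it --image=busybox --restart=Never -- \
    nc -zv -t -w 2 172.17.0.10 "$PORT"      # repeat for each node IP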
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:28.993 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":233,"skipped":3629,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:39:55.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-b6ccccd8-2827-4bf4-936e-ee6ae2a28d6a STEP: Creating configMap with name cm-test-opt-upd-62440203-0894-4703-a931-f0467e1ade03 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b6ccccd8-2827-4bf4-936e-ee6ae2a28d6a STEP: Updating configmap cm-test-opt-upd-62440203-0894-4703-a931-f0467e1ade03 STEP: Creating configMap with name cm-test-opt-create-43e49cd5-eefe-44fd-b74d-424f5a90f750 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:41:39.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9903" for this suite. 
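An optional projected ConfigMap lets the pod start even when the referenced ConfigMap is absent, and the kubelet later reconciles creations, updates, and deletions into the mounted files, which is what this test waits on. A minimal sketch with assumed names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/cm 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-demo
          optional: true              # pod starts even though the ConfigMap doesn't exist yet
EOF
$ kubectl create configmap cm-test-opt-demo --from-literal=data-1=value-1
$ kubectl logs -f projected-optional-demo   # file appears after the kubelet's next sync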
• [SLOW TEST:104.222 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3638,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:41:39.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:41:39.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 11 16:41:40.957: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T16:41:40Z generation:1 name:name1 resourceVersion:15286490 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cb9659aa-e0f7-4397-9c61-8400868dce32] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 11 16:41:51.034: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T16:41:50Z generation:1 name:name2 resourceVersion:15286538 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d47890df-68a3-47f8-9414-0b5ccbf4ab26] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 11 16:42:01.038: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T16:41:40Z generation:2 name:name1 resourceVersion:15286568 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cb9659aa-e0f7-4397-9c61-8400868dce32] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 11 16:42:11.095: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T16:41:50Z generation:2 name:name2 resourceVersion:15286596 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d47890df-68a3-47f8-9414-0b5ccbf4ab26] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 11 16:42:21.126: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T16:41:40Z generation:2 name:name1 resourceVersion:15286624 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cb9659aa-e0f7-4397-9c61-8400868dce32] num:map[num1:9223372036854775807 num2:1000000]]} 
STEP: Deleting second CR May 11 16:42:31.140: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T16:41:50Z generation:2 name:name2 resourceVersion:15286654 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d47890df-68a3-47f8-9414-0b5ccbf4ab26] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:42:41.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-6211" for this suite. • [SLOW TEST:62.259 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":235,"skipped":3639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:42:41.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 11 16:42:41.869: INFO: Waiting up to 5m0s for pod "pod-739fc47e-029d-470d-803c-e81bb4ffc4e3" in namespace "emptydir-2805" to be "success or failure" May 11 16:42:41.879: INFO: Pod "pod-739fc47e-029d-470d-803c-e81bb4ffc4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293213ms May 11 16:42:43.885: INFO: Pod "pod-739fc47e-029d-470d-803c-e81bb4ffc4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016142736s May 11 16:42:46.113: INFO: Pod "pod-739fc47e-029d-470d-803c-e81bb4ffc4e3": Phase="Running", Reason="", readiness=true. Elapsed: 4.244166603s May 11 16:42:48.244: INFO: Pod "pod-739fc47e-029d-470d-803c-e81bb4ffc4e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.375319714s STEP: Saw pod success May 11 16:42:48.244: INFO: Pod "pod-739fc47e-029d-470d-803c-e81bb4ffc4e3" satisfied condition "success or failure" May 11 16:42:48.247: INFO: Trying to get logs from node jerma-worker2 pod pod-739fc47e-029d-470d-803c-e81bb4ffc4e3 container test-container: STEP: delete the pod May 11 16:42:48.593: INFO: Waiting for pod pod-739fc47e-029d-470d-803c-e81bb4ffc4e3 to disappear May 11 16:42:48.597: INFO: Pod pod-739fc47e-029d-470d-803c-e81bb4ffc4e3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:42:48.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2805" for this suite. • [SLOW TEST:6.861 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3668,"failed":0} [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:42:48.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:42:48.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f" in namespace "projected-7599" to be "success or failure" May 11 16:42:49.047: INFO: Pod "downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f": Phase="Pending", Reason="", readiness=false. Elapsed: 81.808254ms May 11 16:42:51.051: INFO: Pod "downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08602749s May 11 16:42:53.055: INFO: Pod "downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f": Phase="Running", Reason="", readiness=true. Elapsed: 4.089753255s May 11 16:42:55.059: INFO: Pod "downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.093754996s STEP: Saw pod success May 11 16:42:55.059: INFO: Pod "downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f" satisfied condition "success or failure" May 11 16:42:55.061: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f container client-container: STEP: delete the pod May 11 16:42:55.090: INFO: Waiting for pod downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f to disappear May 11 16:42:55.095: INFO: Pod downwardapi-volume-444dddc0-77ae-46d0-8fdc-34216727468f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:42:55.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7599" for this suite. • [SLOW TEST:6.499 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3668,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:42:55.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:42:55.211: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:43:01.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3252" for this suite. 
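The websocket-logs test that just finished retrieves container logs over a websocket connection through the API server. Outside the e2e framework, the usual client-go route streams the same kubelet log endpoint over HTTP; a minimal sketch, assuming a kubeconfig at the conventional path and an illustrative pod name:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GetLogs builds a rest.Request; Stream opens the log endpoint and
	// follows new output until the pod ends or the context is cancelled.
	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{Follow: true})
	rc, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc) // print the streamed log lines
}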
• [SLOW TEST:6.185 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3687,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:43:01.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:43:02.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e" in namespace "projected-6912" to be "success or failure" May 11 16:43:02.126: INFO: Pod "downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.195364ms May 11 16:43:04.203: INFO: Pod "downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090842678s May 11 16:43:06.269: INFO: Pod "downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157479784s May 11 16:43:08.274: INFO: Pod "downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.162146497s STEP: Saw pod success May 11 16:43:08.274: INFO: Pod "downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e" satisfied condition "success or failure" May 11 16:43:08.276: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e container client-container: STEP: delete the pod May 11 16:43:08.391: INFO: Waiting for pod downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e to disappear May 11 16:43:08.407: INFO: Pod downwardapi-volume-9bacb950-6667-4c2b-950e-3dab5e31858e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:43:08.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6912" for this suite. 
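What the downward-API test above verifies is the fallback rule for limits.memory: when the container declares no memory limit, the kubelet writes the node's allocatable memory into the projected file instead. A sketch of the relevant volume, with the container name taken from the log and the file path illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// With no memory limit set on the target container,
							// node allocatable memory is written here as the default.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}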
• [SLOW TEST:7.126 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:43:08.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-0494a7a4-d847-4dd2-a5d8-039f9aab9fb2 STEP: Creating secret with name secret-projected-all-test-volume-5cbe4721-b11c-4e44-8176-725a20243cf0 STEP: Creating a pod to test Check all projections for projected volume plugin May 11 16:43:08.996: INFO: Waiting up to 5m0s for pod "projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb" in namespace "projected-186" to be "success or failure" May 11 16:43:09.020: INFO: Pod "projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 23.545617ms May 11 16:43:11.024: INFO: Pod "projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027894521s May 11 16:43:13.028: INFO: Pod "projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb": Phase="Running", Reason="", readiness=true. Elapsed: 4.031555828s May 11 16:43:15.034: INFO: Pod "projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037585319s STEP: Saw pod success May 11 16:43:15.034: INFO: Pod "projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb" satisfied condition "success or failure" May 11 16:43:15.036: INFO: Trying to get logs from node jerma-worker pod projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb container projected-all-volume-test: STEP: delete the pod May 11 16:43:15.207: INFO: Waiting for pod projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb to disappear May 11 16:43:15.383: INFO: Pod projected-volume-35afc564-0641-4e64-a125-2e947a5a9ccb no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:43:15.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-186" for this suite. 
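The "Projected combined" test above mounts a configMap, a secret, and downward-API items through one projected volume and has the container assert all three files. A condensed sketch of that volume — the source names echo the log, the file path is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// All three source types land under a single mount point.
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}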
• [SLOW TEST:7.066 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3736,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:43:15.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 11 16:43:16.470: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:43:29.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1353" for this suite. 
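The submit-and-remove test above sets up its watch before creating the pod, so the ADDED, MODIFIED, and DELETED events (including the graceful-deletion window) are observed in order. A minimal watch loop with client-go — namespace and kubeconfig path are assumptions, and recent client-go versions take a context on Watch:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Start the watch first so events for pods created afterwards are not missed.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// ev.Type cycles through ADDED, MODIFIED, DELETED — the same
		// sequence the test asserts for its pod.
		fmt.Printf("%s %T\n", ev.Type, ev.Object)
	}
}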
• [SLOW TEST:14.011 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:43:29.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:43:29.562: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:43:37.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4069" for this suite. • [SLOW TEST:8.193 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3775,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:43:37.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:43:49.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3682" for this suite. • [SLOW TEST:11.387 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":243,"skipped":3830,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:43:49.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 11 16:43:50.939: INFO: Waiting up to 5m0s for pod "pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a" in namespace "emptydir-3984" to be "success or failure" May 11 16:43:51.474: INFO: Pod "pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a": Phase="Pending", Reason="", readiness=false. Elapsed: 534.762334ms May 11 16:43:53.604: INFO: Pod "pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66545122s May 11 16:43:55.955: INFO: Pod "pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.016057112s STEP: Saw pod success May 11 16:43:55.955: INFO: Pod "pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a" satisfied condition "success or failure" May 11 16:43:56.348: INFO: Trying to get logs from node jerma-worker2 pod pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a container test-container: STEP: delete the pod May 11 16:43:56.937: INFO: Waiting for pod pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a to disappear May 11 16:43:56.955: INFO: Pod pod-6e9ecd56-26b1-41ab-aa18-ec26716efe6a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:43:56.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3984" for this suite. 
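The emptyDir test that just tore down exercises the node's default storage medium and checks the resulting mount type and mode from inside the container. The spec side is small; a sketch with an illustrative volume name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// Medium "" (StorageMediumDefault) backs the volume with node-local
			// disk; StorageMediumMemory would use tmpfs instead.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}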
• [SLOW TEST:7.888 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3842,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:43:56.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:43:57.262: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883" in namespace "projected-1406" to be "success or failure" May 11 16:43:57.290: INFO: Pod "downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883": Phase="Pending", Reason="", readiness=false. Elapsed: 27.761914ms May 11 16:43:59.343: INFO: Pod "downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080814769s May 11 16:44:01.738: INFO: Pod "downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476276877s May 11 16:44:04.060: INFO: Pod "downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883": Phase="Pending", Reason="", readiness=false. Elapsed: 6.798551874s May 11 16:44:06.064: INFO: Pod "downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.802344609s STEP: Saw pod success May 11 16:44:06.064: INFO: Pod "downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883" satisfied condition "success or failure" May 11 16:44:06.067: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883 container client-container: STEP: delete the pod May 11 16:44:06.134: INFO: Waiting for pod downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883 to disappear May 11 16:44:06.169: INFO: Pod downwardapi-volume-b4548fc2-f8be-4e1d-acc0-0b22858da883 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:06.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1406" for this suite. 
• [SLOW TEST:9.216 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:06.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-4af1e9ea-9a8c-4c2b-baa7-d3a6b8e4fc82 STEP: Creating a pod to test consume configMaps May 11 16:44:06.265: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521" in namespace "projected-2741" to be "success or failure" May 11 16:44:06.313: INFO: Pod "pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521": Phase="Pending", Reason="", readiness=false. Elapsed: 48.33648ms May 11 16:44:08.408: INFO: Pod "pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142497339s May 11 16:44:10.411: INFO: Pod "pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145879735s May 11 16:44:12.413: INFO: Pod "pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.148337007s STEP: Saw pod success May 11 16:44:12.414: INFO: Pod "pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521" satisfied condition "success or failure" May 11 16:44:12.415: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521 container projected-configmap-volume-test: STEP: delete the pod May 11 16:44:12.439: INFO: Waiting for pod pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521 to disappear May 11 16:44:12.461: INFO: Pod pod-projected-configmaps-ae609922-c2d6-41cf-b2ff-d43d3b50f521 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:12.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2741" for this suite. 
• [SLOW TEST:6.288 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3903,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:12.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-67eacda9-ac0e-4cec-8f3d-263e818431be STEP: Creating a pod to test consume secrets May 11 16:44:12.888: INFO: Waiting up to 5m0s for pod "pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771" in namespace "secrets-4332" to be "success or failure" May 11 16:44:13.277: INFO: Pod "pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771": Phase="Pending", Reason="", readiness=false. Elapsed: 388.506208ms May 11 16:44:15.280: INFO: Pod "pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391537867s May 11 16:44:17.535: INFO: Pod "pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771": Phase="Pending", Reason="", readiness=false. Elapsed: 4.646580771s May 11 16:44:19.546: INFO: Pod "pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771": Phase="Pending", Reason="", readiness=false. Elapsed: 6.657246172s May 11 16:44:21.550: INFO: Pod "pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.661375094s STEP: Saw pod success May 11 16:44:21.550: INFO: Pod "pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771" satisfied condition "success or failure" May 11 16:44:21.553: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771 container secret-volume-test: STEP: delete the pod May 11 16:44:21.572: INFO: Waiting for pod pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771 to disappear May 11 16:44:21.602: INFO: Pod pod-secrets-8615eef5-b564-4986-a0c9-730a1ce19771 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:21.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4332" for this suite. 
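In the secrets test above, "non-root with defaultMode and fsGroup set" means the pod runs under a non-zero UID while the secret files get an explicit mode and group ownership. A sketch of the corresponding spec fragments — the UID, GID, and mode values are illustrative, since the log does not print the exact numbers:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)    // defaultMode applied to the secret files (illustrative)
	uid := int64(1000)     // run as non-root (illustrative)
	fsGroup := int64(1001) // files group-owned by this GID (illustrative)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test", DefaultMode: &mode},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}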
• [SLOW TEST:9.140 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3904,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:21.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 11 16:44:21.706: INFO: Created pod &Pod{ObjectMeta:{dns-1329 dns-1329 /api/v1/namespaces/dns-1329/pods/dns-1329 e1d7012d-c841-44ef-bff9-d87f164352e2 15287254 0 2020-05-11 16:44:21 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q98ws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q98ws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q98ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:
nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 11 16:44:27.724: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1329 PodName:dns-1329 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:44:27.724: INFO: >>> kubeConfig: /root/.kube/config I0511 16:44:27.758403 6 log.go:172] (0xc0029fc000) (0xc001baa820) Create stream I0511 16:44:27.758428 6 log.go:172] (0xc0029fc000) (0xc001baa820) Stream added, broadcasting: 1 I0511 16:44:27.759778 6 log.go:172] (0xc0029fc000) Reply frame received for 1 I0511 16:44:27.759815 6 log.go:172] (0xc0029fc000) (0xc001f22460) Create stream I0511 16:44:27.759823 6 log.go:172] (0xc0029fc000) (0xc001f22460) Stream added, broadcasting: 3 I0511 16:44:27.760763 6 log.go:172] (0xc0029fc000) Reply frame received for 3 I0511 16:44:27.760792 6 log.go:172] (0xc0029fc000) (0xc00240a640) Create stream I0511 16:44:27.760805 6 log.go:172] (0xc0029fc000) (0xc00240a640) Stream added, broadcasting: 5 I0511 16:44:27.761857 6 log.go:172] (0xc0029fc000) Reply frame received for 5 I0511 16:44:27.822647 6 log.go:172] (0xc0029fc000) Data frame received for 3 I0511 16:44:27.822669 6 log.go:172] (0xc001f22460) (3) Data frame handling I0511 16:44:27.822691 6 log.go:172] (0xc001f22460) (3) Data frame sent I0511 16:44:27.823402 6 log.go:172] (0xc0029fc000) Data frame received for 5 I0511 16:44:27.823423 6 log.go:172] (0xc00240a640) (5) Data frame handling I0511 16:44:27.823663 6 log.go:172] (0xc0029fc000) Data frame received for 3 I0511 16:44:27.823678 6 log.go:172] (0xc001f22460) (3) Data frame handling I0511 16:44:27.825465 6 log.go:172] (0xc0029fc000) Data frame received for 1 I0511 16:44:27.825489 6 log.go:172] (0xc001baa820) (1) Data frame handling I0511 16:44:27.825524 6 log.go:172] (0xc001baa820) (1) Data frame sent I0511 16:44:27.825544 6 log.go:172] (0xc0029fc000) (0xc001baa820) Stream removed, broadcasting: 1 I0511 16:44:27.825627 6 log.go:172] (0xc0029fc000) (0xc001baa820) Stream removed, broadcasting: 1 I0511 16:44:27.825641 6 log.go:172] (0xc0029fc000) (0xc001f22460) Stream removed, broadcasting: 3 I0511 16:44:27.825748 6 log.go:172] (0xc0029fc000) (0xc00240a640) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 11 16:44:27.825: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1329 PodName:dns-1329 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:44:27.825: INFO: >>> kubeConfig: /root/.kube/config I0511 16:44:27.856390 6 log.go:172] (0xc00278ab00) (0xc00240aa00) Create stream I0511 16:44:27.856415 6 log.go:172] (0xc00278ab00) (0xc00240aa00) Stream added, broadcasting: 1 I0511 16:44:27.858556 6 log.go:172] (0xc00278ab00) Reply frame received for 1 I0511 16:44:27.858582 6 log.go:172] (0xc00278ab00) (0xc00240aaa0) Create stream I0511 16:44:27.858599 6 log.go:172] (0xc00278ab00) (0xc00240aaa0) Stream added, broadcasting: 3 I0511 16:44:27.859403 6 log.go:172] (0xc00278ab00) Reply frame received for 3 I0511 16:44:27.859434 6 log.go:172] (0xc00278ab00) (0xc001baac80) Create stream I0511 16:44:27.859445 6 log.go:172] (0xc00278ab00) (0xc001baac80) Stream added, broadcasting: 5 I0511 16:44:27.860153 6 log.go:172] (0xc00278ab00) Reply frame received for 5 I0511 16:44:27.941229 6 log.go:172] (0xc00278ab00) Data frame received for 3 I0511 16:44:27.941252 6 log.go:172] (0xc00240aaa0) (3) Data frame handling I0511 16:44:27.941263 6 log.go:172] (0xc00240aaa0) (3) Data frame sent I0511 16:44:27.942086 6 log.go:172] (0xc00278ab00) Data frame received for 3 I0511 16:44:27.942120 6 log.go:172] (0xc00240aaa0) (3) Data frame handling I0511 16:44:27.942147 6 log.go:172] (0xc00278ab00) Data frame received for 5 I0511 16:44:27.942162 6 log.go:172] (0xc001baac80) (5) Data frame handling I0511 16:44:27.943468 6 log.go:172] (0xc00278ab00) Data frame received for 1 I0511 16:44:27.943478 6 log.go:172] (0xc00240aa00) (1) Data frame handling I0511 16:44:27.943484 6 log.go:172] (0xc00240aa00) (1) Data frame sent I0511 16:44:27.943490 6 log.go:172] (0xc00278ab00) (0xc00240aa00) Stream removed, broadcasting: 1 I0511 16:44:27.943500 6 log.go:172] (0xc00278ab00) Go away received I0511 16:44:27.943605 6 log.go:172] (0xc00278ab00) (0xc00240aa00) Stream removed, broadcasting: 1 I0511 16:44:27.943630 6 log.go:172] (0xc00278ab00) (0xc00240aaa0) Stream removed, broadcasting: 3 I0511 16:44:27.943644 6 log.go:172] (0xc00278ab00) (0xc001baac80) Stream removed, broadcasting: 5 May 11 16:44:27.943: INFO: Deleting pod dns-1329... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:27.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1329" for this suite. 
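The pod dump printed earlier in this test shows exactly what the DNS test configures: DNSPolicy None plus a custom PodDNSConfig with nameserver 1.1.1.1 and search suffix resolv.conf.local. Rebuilt as a spec fragment from those logged values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		// DNSNone ignores the node and cluster resolvers entirely; the pod's
		// resolv.conf is generated from DNSConfig alone.
		DNSPolicy: corev1.DNSNone,
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},
			Searches:    []string{"resolv.conf.local"},
		},
		Containers: []corev1.Container{{
			Name:  "agnhost",
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
			Args:  []string{"pause"},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}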
• [SLOW TEST:6.358 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":248,"skipped":3915,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:27.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:44:28.046: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:29.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8934" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":249,"skipped":3937,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:29.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-bafa6e02-b5cd-4788-9c60-67e56d98f073 STEP: Creating a pod to test consume secrets May 11 16:44:30.356: INFO: Waiting up to 5m0s for pod "pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149" in namespace "secrets-9457" to be "success or failure" May 11 16:44:30.374: INFO: Pod "pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149": Phase="Pending", Reason="", readiness=false. Elapsed: 17.862184ms May 11 16:44:32.378: INFO: Pod "pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021979401s May 11 16:44:34.381: INFO: Pod "pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025302944s May 11 16:44:36.882: INFO: Pod "pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149": Phase="Running", Reason="", readiness=true. Elapsed: 6.525959568s May 11 16:44:38.885: INFO: Pod "pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.529050252s STEP: Saw pod success May 11 16:44:38.885: INFO: Pod "pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149" satisfied condition "success or failure" May 11 16:44:38.887: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149 container secret-env-test: STEP: delete the pod May 11 16:44:39.057: INFO: Waiting for pod pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149 to disappear May 11 16:44:39.222: INFO: Pod pod-secrets-c778419c-46c5-4e6b-9f04-c17e07dbc149 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:39.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9457" for this suite. • [SLOW TEST:9.460 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:39.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-3bf01f0f-147a-4a93-bd72-5e41be4182ce STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:47.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5807" for this suite. 
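The configMap binary-data test above stores UTF-8 text and raw bytes in one object: BinaryData is base64-encoded on the wire and may not share keys with Data. A sketch with illustrative keys and bytes:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		// Keys must be unique across Data and BinaryData combined.
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfb, 0xad, 0x20, 0x00}},
	}
	out, _ := json.MarshalIndent(cm, "", "  ") // BinaryData serializes as base64
	fmt.Println(string(out))
}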
• [SLOW TEST:8.638 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":3982,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:47.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:48.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-8707" for this suite. 
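The 406 test above probes server-side table conversion: clients like kubectl ask for a Table rendering through the Accept header, and a backend that cannot produce metadata must answer 406 Not Acceptable. A sketch of such a request with client-go's REST client, assuming a kubeconfig at the conventional path:

package main

import (
	"context"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server to render the pod list as a meta.k8s.io/v1 Table.
	raw, err := cs.CoreV1().RESTClient().Get().
		Namespace("default").
		Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err) // a backend without table support surfaces a 406 here
	}
	fmt.Println(string(raw))
}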
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":252,"skipped":3996,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:48.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:44:48.712: INFO: Creating ReplicaSet my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d May 11 16:44:48.752: INFO: Pod name my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d: Found 0 pods out of 1 May 11 16:44:53.763: INFO: Pod name my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d: Found 1 pods out of 1 May 11 16:44:53.763: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d" is running May 11 16:44:53.766: INFO: Pod "my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d-tv7qs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 16:44:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 16:44:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 16:44:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 16:44:48 +0000 UTC Reason: Message:}]) May 11 16:44:53.766: INFO: Trying to dial the pod May 11 16:44:58.787: INFO: Controller my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d: Got expected result from replica 1 [my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d-tv7qs]: "my-hostname-basic-d9a81e86-2627-49f6-8495-85cb9927ba2d-tv7qs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:44:58.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-855" for this suite. 
• [SLOW TEST:10.168 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":253,"skipped":3999,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:44:58.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-da356182-97d3-42ea-b139-c509d50ce948 STEP: Creating a pod to test consume configMaps May 11 16:44:59.396: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714" in namespace "projected-9755" to be "success or failure" May 11 16:44:59.469: INFO: Pod "pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714": Phase="Pending", Reason="", readiness=false. Elapsed: 73.385405ms May 11 16:45:01.494: INFO: Pod "pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098175807s May 11 16:45:03.629: INFO: Pod "pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23393525s May 11 16:45:05.635: INFO: Pod "pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239007982s STEP: Saw pod success May 11 16:45:05.635: INFO: Pod "pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714" satisfied condition "success or failure" May 11 16:45:05.637: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714 container projected-configmap-volume-test: STEP: delete the pod May 11 16:45:05.897: INFO: Waiting for pod pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714 to disappear May 11 16:45:06.097: INFO: Pod pod-projected-configmaps-7d04444d-22ac-4a03-a6e2-4dfde1935714 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:45:06.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9755" for this suite. 
• [SLOW TEST:7.311 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4031,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:45:06.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3981 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3981 STEP: Deleting pre-stop pod May 11 16:45:21.526: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:45:21.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3981" for this suite. 
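The "prestop": 1 counter above is written by the tester pod's preStop hook hitting the server pod before termination. A minimal sketch of such a hook, with a hypothetical endpoint (the real test wires the two pods together itself):

apiVersion: v1
kind: Pod
metadata:
  name: tester                        # illustrative
spec:
  containers:
  - name: tester
    image: busybox                    # assumed
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://server:8080/write?prestop=1"]   # hypothetical URL

The kubelet runs the hook after the delete request but before sending SIGTERM, so the server observes the hit while the pod is still terminating.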
• [SLOW TEST:15.507 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":255,"skipped":4045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:45:21.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 16:45:21.689: INFO: Waiting up to 5m0s for pod "pod-27417cd3-bfda-4b0d-8af6-406f44503dc1" in namespace "emptydir-8120" to be "success or failure" May 11 16:45:21.693: INFO: Pod "pod-27417cd3-bfda-4b0d-8af6-406f44503dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832505ms May 11 16:45:23.828: INFO: Pod "pod-27417cd3-bfda-4b0d-8af6-406f44503dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138612505s May 11 16:45:26.613: INFO: Pod "pod-27417cd3-bfda-4b0d-8af6-406f44503dc1": Phase="Running", Reason="", readiness=true. Elapsed: 4.923945798s May 11 16:45:28.618: INFO: Pod "pod-27417cd3-bfda-4b0d-8af6-406f44503dc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.92833956s STEP: Saw pod success May 11 16:45:28.618: INFO: Pod "pod-27417cd3-bfda-4b0d-8af6-406f44503dc1" satisfied condition "success or failure" May 11 16:45:28.621: INFO: Trying to get logs from node jerma-worker pod pod-27417cd3-bfda-4b0d-8af6-406f44503dc1 container test-container: STEP: delete the pod May 11 16:45:28.701: INFO: Waiting for pod pod-27417cd3-bfda-4b0d-8af6-406f44503dc1 to disappear May 11 16:45:28.716: INFO: Pod pod-27417cd3-bfda-4b0d-8af6-406f44503dc1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:45:28.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8120" for this suite. 
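(non-root,0666,default) in the test name means: run as a non-root UID, create a file with mode 0666, on the default emptyDir medium (node disk). A minimal sketch under those assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test            # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # any non-root UID
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium; medium: Memory would give tmpfs
  containers:
  - name: test-container
    image: busybox                    # assumed
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume

The pod logs the resulting permissions, which is what the framework verifies from the container logs.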
• [SLOW TEST:7.196 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:45:28.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 16:45:32.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 16:45:34.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812332, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812332, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812331, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:45:36.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812332, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812332, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812331, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 16:45:39.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812332, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812332, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812331, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:45:42.281: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:45:42.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5636" for this suite. STEP: Destroying namespace "webhook-5636-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.177 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":257,"skipped":4171,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:45:42.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 11 16:45:50.054: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 11 16:45:55.219: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:45:55.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-818" for this suite.
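The graceful-delete flow above can be reproduced by hand. A sketch, assuming the proxy came up on port 8001 (with -p 0 the proxy prints whichever free port it picked):

# Delete with an explicit grace period:
kubectl delete pod <pod-name> --namespace=pods-818 --grace-period=30

# Or, as the test does, via the raw API through kubectl proxy:
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":30}' \
  http://127.0.0.1:8001/api/v1/namespaces/pods-818/pods/<pod-name>

The kubelet acknowledges the termination notice by marking the pod Terminating; once cleanup finishes the pod disappears from the API, which is the condition polled above.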
• [SLOW TEST:12.237 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":258,"skipped":4193,"failed":0} [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:45:55.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-709/secret-test-e7f9b191-50b2-407c-a55a-03056c259fda STEP: Creating a pod to test consume secrets May 11 16:45:55.381: INFO: Waiting up to 5m0s for pod "pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645" in namespace "secrets-709" to be "success or failure" May 11 16:45:55.389: INFO: Pod "pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194637ms May 11 16:45:57.624: INFO: Pod "pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243280557s May 11 16:45:59.751: INFO: Pod "pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370085664s May 11 16:46:01.755: INFO: Pod "pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645": Phase="Running", Reason="", readiness=true. Elapsed: 6.374126013s May 11 16:46:03.811: INFO: Pod "pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.430352406s STEP: Saw pod success May 11 16:46:03.811: INFO: Pod "pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645" satisfied condition "success or failure" May 11 16:46:03.815: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645 container env-test: STEP: delete the pod May 11 16:46:03.881: INFO: Waiting for pod pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645 to disappear May 11 16:46:04.086: INFO: Pod pod-configmaps-e2c431b5-2d82-4816-91ac-4311f9a93645 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:04.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-709" for this suite. 
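A minimal sketch of consuming a Secret through the environment, with illustrative names and an assumed key:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env                # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                    # assumed
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test           # the Secret created in the first STEP
          key: data-1                 # key name assumed

The framework then reads the container log and checks that the decoded secret value appears in the environment.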
• [SLOW TEST:8.868 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4193,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:04.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:46:05.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0" in namespace "downward-api-1671" to be "success or failure" May 11 16:46:05.503: INFO: Pod "downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0": Phase="Pending", Reason="", readiness=false. Elapsed: 281.635943ms May 11 16:46:07.506: INFO: Pod "downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285582418s May 11 16:46:09.538: INFO: Pod "downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317014564s May 11 16:46:11.956: INFO: Pod "downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0": Phase="Running", Reason="", readiness=true. Elapsed: 6.735320516s May 11 16:46:13.960: INFO: Pod "downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.739020867s STEP: Saw pod success May 11 16:46:13.960: INFO: Pod "downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0" satisfied condition "success or failure" May 11 16:46:13.963: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0 container client-container: STEP: delete the pod May 11 16:46:13.996: INFO: Waiting for pod downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0 to disappear May 11 16:46:14.026: INFO: Pod downwardapi-volume-080acd42-3138-4b6f-b1e5-103117c1efe0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:14.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1671" for this suite. 
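A minimal sketch of exposing a container's CPU limit through a downwardAPI volume; names, the limit, and the divisor are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # assumed
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                     # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                 # the file holds the limit divided by this

With divisor 1m the file would contain 500, the limit expressed in millicores.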
• [SLOW TEST:9.938 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:14.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:14.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9693" for this suite. 
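The QOS class is derived from resources rather than set directly: when every container's requests equal its limits for both cpu and memory, the pod is classed Guaranteed. A sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed                # illustrative
spec:
  containers:
  - name: app
    image: busybox                    # assumed
    command: ["sleep", "600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi

kubectl get pod qos-guaranteed -o jsonpath='{.status.qosClass}'   # prints: Guaranteed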
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":261,"skipped":4256,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:14.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:14.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1804" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":262,"skipped":4273,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:14.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:46:15.503: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 16:46:18.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2606 create -f -' May 11 16:46:20.818: INFO: stderr: "" May 11 16:46:20.818: INFO: stdout: "e2e-test-crd-publish-openapi-5992-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 11 16:46:20.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2606 delete e2e-test-crd-publish-openapi-5992-crds test-cr' May 11 16:46:21.222: INFO: stderr: "" May 11 16:46:21.222: INFO: stdout: "e2e-test-crd-publish-openapi-5992-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 11 16:46:21.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2606 apply -f -' May 11 16:46:22.352: INFO: stderr: "" May 11 16:46:22.352: INFO: stdout: "e2e-test-crd-publish-openapi-5992-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 11 16:46:22.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2606 delete e2e-test-crd-publish-openapi-5992-crds test-cr' May 11 16:46:23.241: INFO: stderr: "" May 11 16:46:23.241: INFO: stdout: "e2e-test-crd-publish-openapi-5992-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 11 16:46:23.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5992-crds' May 11 16:46:23.747: INFO: stderr: "" May 11 16:46:23.747: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5992-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:27.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2606" for this suite. 
• [SLOW TEST:12.726 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":263,"skipped":4284,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:27.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 11 16:46:35.274: INFO: 8 pods remaining May 11 16:46:35.274: INFO: 0 pods has nil DeletionTimestamp May 11 16:46:35.274: INFO: May 11 16:46:36.469: INFO: 0 pods remaining May 11 16:46:36.469: INFO: 0 pods has nil DeletionTimestamp May 11 16:46:36.469: INFO: May 11 16:46:38.248: INFO: 0 pods remaining May 11 16:46:38.248: INFO: 0 pods has nil DeletionTimestamp May 11 16:46:38.248: INFO: STEP: Gathering metrics W0511 16:46:39.740182 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 16:46:39.740: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:39.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-848" for this suite. 
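The countdown from "8 pods remaining" to 0 is foreground cascading deletion: with propagationPolicy Foreground the RC is kept (with a deletionTimestamp and the foregroundDeletion finalizer) until the garbage collector has removed all its pods. A sketch via the raw API, assuming a kubectl proxy on port 8001 (kubectl of this era has no flag for it; newer kubectl supports --cascade=foreground):

curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/gc-848/replicationcontrollers/<rc-name>

Any authenticated client can issue the same DELETE; the proxy URL is only a convenience.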
• [SLOW TEST:13.489 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":264,"skipped":4294,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:40.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:46:42.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74" in namespace "downward-api-7287" to be "success or failure" May 11 16:46:43.082: INFO: Pod "downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74": Phase="Pending", Reason="", readiness=false. Elapsed: 366.034745ms May 11 16:46:45.170: INFO: Pod "downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453729853s May 11 16:46:47.218: INFO: Pod "downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501904257s May 11 16:46:49.227: INFO: Pod "downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.511532766s STEP: Saw pod success May 11 16:46:49.227: INFO: Pod "downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74" satisfied condition "success or failure" May 11 16:46:49.250: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74 container client-container: STEP: delete the pod May 11 16:46:49.539: INFO: Waiting for pod downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74 to disappear May 11 16:46:49.637: INFO: Pod downwardapi-volume-1b5a1ea4-0e4c-4a16-bd67-d1b049d56c74 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:49.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7287" for this suite. 
• [SLOW TEST:9.010 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4312,"failed":0} SSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:49.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:46:50.775: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6d5db457-a008-4439-bc85-ec72927dacfd" in namespace "security-context-test-919" to be "success or failure" May 11 16:46:50.973: INFO: Pod "alpine-nnp-false-6d5db457-a008-4439-bc85-ec72927dacfd": Phase="Pending", Reason="", readiness=false. Elapsed: 197.927655ms May 11 16:46:52.977: INFO: Pod "alpine-nnp-false-6d5db457-a008-4439-bc85-ec72927dacfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201961363s May 11 16:46:55.069: INFO: Pod "alpine-nnp-false-6d5db457-a008-4439-bc85-ec72927dacfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293593711s May 11 16:46:57.073: INFO: Pod "alpine-nnp-false-6d5db457-a008-4439-bc85-ec72927dacfd": Phase="Running", Reason="", readiness=true. Elapsed: 6.297165829s May 11 16:46:59.077: INFO: Pod "alpine-nnp-false-6d5db457-a008-4439-bc85-ec72927dacfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.301431136s May 11 16:46:59.077: INFO: Pod "alpine-nnp-false-6d5db457-a008-4439-bc85-ec72927dacfd" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:46:59.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-919" for this suite. 
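A minimal sketch of the no-privilege-escalation pod; the image and UID are assumptions based on the test name:

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine:3.7                 # assumed
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000                 # non-root, so any escalation would be observable
      allowPrivilegeEscalation: false

allowPrivilegeEscalation: false sets the no_new_privs flag on the container's processes, so setuid binaries cannot raise the effective UID; the test asserts this from inside the container.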
• [SLOW TEST:9.368 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4316,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:46:59.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4989 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4989 STEP: creating replication controller externalsvc in namespace services-4989 I0511 16:46:59.716362 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4989, replica count: 2 I0511 16:47:02.766799 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:47:05.767024 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 16:47:08.767198 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 11 16:47:08.975: INFO: Creating new exec pod May 11 16:47:17.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4989 execpodlhnt5 -- /bin/sh -x -c nslookup nodeport-service' May 11 16:47:18.160: INFO: stderr: "I0511 16:47:18.074248 3578 log.go:172] (0xc000a240b0) (0xc00075b5e0) Create stream\nI0511 16:47:18.074319 3578 log.go:172] (0xc000a240b0) (0xc00075b5e0) Stream added, broadcasting: 1\nI0511 16:47:18.076819 3578 log.go:172] (0xc000a240b0) Reply frame received for 1\nI0511 16:47:18.076860 3578 log.go:172] (0xc000a240b0) (0xc000a36000) Create stream\nI0511 16:47:18.076872 3578 log.go:172] (0xc000a240b0) (0xc000a36000) Stream added, broadcasting: 3\nI0511 16:47:18.078031 3578 log.go:172] (0xc000a240b0) Reply frame received for 3\nI0511 
16:47:18.078083 3578 log.go:172] (0xc000a240b0) (0xc0006f7b80) Create stream\nI0511 16:47:18.078098 3578 log.go:172] (0xc000a240b0) (0xc0006f7b80) Stream added, broadcasting: 5\nI0511 16:47:18.079300 3578 log.go:172] (0xc000a240b0) Reply frame received for 5\nI0511 16:47:18.142188 3578 log.go:172] (0xc000a240b0) Data frame received for 5\nI0511 16:47:18.142223 3578 log.go:172] (0xc0006f7b80) (5) Data frame handling\nI0511 16:47:18.142245 3578 log.go:172] (0xc0006f7b80) (5) Data frame sent\n+ nslookup nodeport-service\nI0511 16:47:18.151969 3578 log.go:172] (0xc000a240b0) Data frame received for 3\nI0511 16:47:18.152000 3578 log.go:172] (0xc000a36000) (3) Data frame handling\nI0511 16:47:18.152029 3578 log.go:172] (0xc000a36000) (3) Data frame sent\nI0511 16:47:18.152697 3578 log.go:172] (0xc000a240b0) Data frame received for 3\nI0511 16:47:18.152714 3578 log.go:172] (0xc000a36000) (3) Data frame handling\nI0511 16:47:18.152738 3578 log.go:172] (0xc000a36000) (3) Data frame sent\nI0511 16:47:18.153535 3578 log.go:172] (0xc000a240b0) Data frame received for 3\nI0511 16:47:18.153553 3578 log.go:172] (0xc000a36000) (3) Data frame handling\nI0511 16:47:18.153583 3578 log.go:172] (0xc000a240b0) Data frame received for 5\nI0511 16:47:18.153608 3578 log.go:172] (0xc0006f7b80) (5) Data frame handling\nI0511 16:47:18.155328 3578 log.go:172] (0xc000a240b0) Data frame received for 1\nI0511 16:47:18.155348 3578 log.go:172] (0xc00075b5e0) (1) Data frame handling\nI0511 16:47:18.155359 3578 log.go:172] (0xc00075b5e0) (1) Data frame sent\nI0511 16:47:18.155465 3578 log.go:172] (0xc000a240b0) (0xc00075b5e0) Stream removed, broadcasting: 1\nI0511 16:47:18.155546 3578 log.go:172] (0xc000a240b0) Go away received\nI0511 16:47:18.155945 3578 log.go:172] (0xc000a240b0) (0xc00075b5e0) Stream removed, broadcasting: 1\nI0511 16:47:18.155965 3578 log.go:172] (0xc000a240b0) (0xc000a36000) Stream removed, broadcasting: 3\nI0511 16:47:18.155974 3578 log.go:172] (0xc000a240b0) (0xc0006f7b80) Stream removed, broadcasting: 5\n" May 11 16:47:18.160: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4989.svc.cluster.local\tcanonical name = externalsvc.services-4989.svc.cluster.local.\nName:\texternalsvc.services-4989.svc.cluster.local\nAddress: 10.100.183.101\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4989, will wait for the garbage collector to delete the pods May 11 16:47:18.250: INFO: Deleting ReplicationController externalsvc took: 5.144148ms May 11 16:47:19.050: INFO: Terminating ReplicationController externalsvc pods took: 800.27176ms May 11 16:47:29.840: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:47:29.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4989" for this suite. 
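After the switch, nodeport-service is just a DNS alias, which is why the nslookup output above shows a CNAME to externalsvc. The resulting Service shape (a sketch; the test mutates the existing object rather than applying this manifest):

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-4989
spec:
  type: ExternalName
  externalName: externalsvc.services-4989.svc.cluster.local

An ExternalName service needs no selector, clusterIP, or endpoints; the cluster DNS answers queries for it with the CNAME alone.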
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:30.870 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":267,"skipped":4328,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:47:29.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:47:30.439: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 16:47:32.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4454 create -f -' May 11 16:47:43.161: INFO: stderr: "" May 11 16:47:43.161: INFO: stdout: "e2e-test-crd-publish-openapi-3019-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 16:47:43.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4454 delete e2e-test-crd-publish-openapi-3019-crds test-cr' May 11 16:47:43.806: INFO: stderr: "" May 11 16:47:43.806: INFO: stdout: "e2e-test-crd-publish-openapi-3019-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 11 16:47:43.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4454 apply -f -' May 11 16:47:44.712: INFO: stderr: "" May 11 16:47:44.712: INFO: stdout: "e2e-test-crd-publish-openapi-3019-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 16:47:44.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4454 delete e2e-test-crd-publish-openapi-3019-crds test-cr' May 11 16:47:45.158: INFO: stderr: "" May 11 16:47:45.158: INFO: stdout: "e2e-test-crd-publish-openapi-3019-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 11 16:47:45.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3019-crds' May 11 16:47:45.999: INFO: stderr: "" May 11 16:47:45.999: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3019-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for 
Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:47:49.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4454" for this suite. • [SLOW TEST:19.437 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":268,"skipped":4329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:47:49.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:48:49.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-811" for this suite. 
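A minimal sketch of a readiness probe that always fails; the image and timings are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: never-ready                   # illustrative
spec:
  containers:
  - name: probe-test
    image: busybox                    # assumed
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]       # every probe attempt fails
      initialDelaySeconds: 5
      periodSeconds: 5

Failed readiness probes only keep the pod out of Ready (and out of Service endpoints); unlike a liveness probe they never trigger a restart, so the test expects Ready=false and restartCount=0 for the full minute it watches.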
• [SLOW TEST:60.459 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4353,"failed":0} SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:48:49.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-72e7902a-e1ef-4d5a-bae7-5cd561cbe508 STEP: Creating secret with name s-test-opt-upd-9784d2ad-bcfe-4d62-bcc6-e3e677a56ade STEP: Creating the pod STEP: Deleting secret s-test-opt-del-72e7902a-e1ef-4d5a-bae7-5cd561cbe508 STEP: Updating secret s-test-opt-upd-9784d2ad-bcfe-4d62-bcc6-e3e677a56ade STEP: Creating secret with name s-test-opt-create-4d1beb09-f7d9-4963-a970-ff2e967ed6ac STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:50:18.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4604" for this suite. 
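A minimal sketch of the optional Secret volume this test exercises; names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-pod           # illustrative
spec:
  containers:
  - name: watcher
    image: busybox                    # assumed
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create   # illustrative
      optional: true                  # pod starts even if the Secret does not exist yet

Because the volume is optional, the pod runs before the Secret is created; once the Secret appears (or is updated or deleted), the kubelet resyncs the mounted files, which is the change the "waiting to observe update in volume" STEP polls for.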
• [SLOW TEST:88.660 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:50:18.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 16:50:18.564: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 11 16:50:20.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 create -f -' May 11 16:50:24.955: INFO: stderr: "" May 11 16:50:24.955: INFO: stdout: "e2e-test-crd-publish-openapi-2190-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 16:50:24.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 delete e2e-test-crd-publish-openapi-2190-crds test-foo' May 11 16:50:25.204: INFO: stderr: "" May 11 16:50:25.204: INFO: stdout: "e2e-test-crd-publish-openapi-2190-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 11 16:50:25.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 apply -f -' May 11 16:50:25.529: INFO: stderr: "" May 11 16:50:25.529: INFO: stdout: "e2e-test-crd-publish-openapi-2190-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 16:50:25.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 delete e2e-test-crd-publish-openapi-2190-crds test-foo' May 11 16:50:25.778: INFO: stderr: "" May 11 16:50:25.779: INFO: stdout: "e2e-test-crd-publish-openapi-2190-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 11 16:50:25.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 create -f -' May 11 16:50:26.025: INFO: rc: 1 May 11 16:50:26.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 apply -f -' May 11 16:50:26.343: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 11 16:50:26.343: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 create -f -' May 11 16:50:26.611: INFO: rc: 1 May 11 16:50:26.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4292 apply -f -' May 11 16:50:26.868: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 11 16:50:26.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2190-crds' May 11 16:50:27.107: INFO: stderr: "" May 11 16:50:27.107: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2190-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 11 16:50:27.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2190-crds.metadata' May 11 16:50:27.368: INFO: stderr: "" May 11 16:50:27.368: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2190-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 11 16:50:27.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2190-crds.spec' May 11 16:50:27.616: INFO: stderr: "" May 11 16:50:27.616: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2190-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 11 16:50:27.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2190-crds.spec.bars' May 11 16:50:27.894: INFO: stderr: "" May 11 16:50:27.894: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2190-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 11 16:50:27.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2190-crds.spec.bars2' May 11 16:50:28.212: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:50:30.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4292" for this suite. • [SLOW TEST:11.575 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":271,"skipped":4425,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:50:30.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 16:50:30.763: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 16:50:32.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812630, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812630, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812630, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724812630, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 16:50:35.909: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:50:36.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5372" for this suite. STEP: Destroying namespace "webhook-5372-markers" for this suite. 
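------------------------------
The two API-machinery flows above (CRD OpenAPI publishing with client-side validation, and listing then collection-deleting validating webhooks) can be reproduced by hand against any cluster. A minimal sketch, assuming a schema-bearing CRD whose plural is "mycrds" and a label selector; both names are illustrative, not the test's own:

# Inspect the schema the apiserver publishes for a CRD
kubectl explain mycrds
kubectl explain mycrds.spec --recursive

# List all validating webhook configurations, then delete a labeled collection
kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfigurations -l e2e-list-test=webhook-5372

------------------------------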
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.488 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":272,"skipped":4432,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:50:36.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1ee7d257-d735-494a-81dc-fe8b97c917ae STEP: Creating a pod to test consume secrets May 11 16:50:36.741: INFO: Waiting up to 5m0s for pod "pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5" in namespace "secrets-828" to be "success or failure" May 11 16:50:36.759: INFO: Pod "pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.028902ms May 11 16:50:38.764: INFO: Pod "pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023179087s May 11 16:50:40.769: INFO: Pod "pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027600378s STEP: Saw pod success May 11 16:50:40.769: INFO: Pod "pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5" satisfied condition "success or failure" May 11 16:50:40.771: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5 container secret-volume-test: STEP: delete the pod May 11 16:50:40.805: INFO: Waiting for pod pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5 to disappear May 11 16:50:40.825: INFO: Pod pod-secrets-6d0e473b-8c85-452e-a89f-58117b2040f5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:50:40.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-828" for this suite. 
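------------------------------
For reference, the Secret-as-volume pattern validated by the test above looks roughly like the following; the resource names, image, key, and mount path here are hypothetical, not taken from the test:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]   # prints the projected key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF

------------------------------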
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4435,"failed":0} SSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:50:40.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 11 16:51:01.010: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.010: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.040799 6 log.go:172] (0xc00561a370) (0xc00143f900) Create stream I0511 16:51:01.040829 6 log.go:172] (0xc00561a370) (0xc00143f900) Stream added, broadcasting: 1 I0511 16:51:01.042664 6 log.go:172] (0xc00561a370) Reply frame received for 1 I0511 16:51:01.042718 6 log.go:172] (0xc00561a370) (0xc001b12280) Create stream I0511 16:51:01.042734 6 log.go:172] (0xc00561a370) (0xc001b12280) Stream added, broadcasting: 3 I0511 16:51:01.043707 6 log.go:172] (0xc00561a370) Reply frame received for 3 I0511 16:51:01.043739 6 log.go:172] (0xc00561a370) (0xc000d2c5a0) Create stream I0511 16:51:01.043750 6 log.go:172] (0xc00561a370) (0xc000d2c5a0) Stream added, broadcasting: 5 I0511 16:51:01.044796 6 log.go:172] (0xc00561a370) Reply frame received for 5 I0511 16:51:01.104593 6 log.go:172] (0xc00561a370) Data frame received for 3 I0511 16:51:01.104631 6 log.go:172] (0xc001b12280) (3) Data frame handling I0511 16:51:01.104644 6 log.go:172] (0xc001b12280) (3) Data frame sent I0511 16:51:01.104664 6 log.go:172] (0xc00561a370) Data frame received for 3 I0511 16:51:01.104672 6 log.go:172] (0xc001b12280) (3) Data frame handling I0511 16:51:01.104691 6 log.go:172] (0xc00561a370) Data frame received for 5 I0511 16:51:01.104698 6 log.go:172] (0xc000d2c5a0) (5) Data frame handling I0511 16:51:01.106205 6 log.go:172] (0xc00561a370) Data frame received for 1 I0511 16:51:01.106227 6 log.go:172] (0xc00143f900) (1) Data frame handling I0511 16:51:01.106237 6 log.go:172] (0xc00143f900) (1) Data frame sent I0511 16:51:01.106246 6 log.go:172] (0xc00561a370) (0xc00143f900) Stream removed, broadcasting: 1 I0511 16:51:01.106301 6 log.go:172] (0xc00561a370) Go away received I0511 16:51:01.106319 6 log.go:172] (0xc00561a370) (0xc00143f900) Stream removed, broadcasting: 1 I0511 16:51:01.106393 6 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc001b12280), 0x5:(*spdystream.Stream)(0xc000d2c5a0)} I0511 16:51:01.106427 6 log.go:172] (0xc00561a370) (0xc001b12280) Stream removed, broadcasting: 3 I0511 
16:51:01.106451 6 log.go:172] (0xc00561a370) (0xc000d2c5a0) Stream removed, broadcasting: 5 May 11 16:51:01.106: INFO: Exec stderr: "" May 11 16:51:01.106: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.106: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.136940 6 log.go:172] (0xc004fcc580) (0xc000d2ce60) Create stream I0511 16:51:01.136962 6 log.go:172] (0xc004fcc580) (0xc000d2ce60) Stream added, broadcasting: 1 I0511 16:51:01.139392 6 log.go:172] (0xc004fcc580) Reply frame received for 1 I0511 16:51:01.139460 6 log.go:172] (0xc004fcc580) (0xc00143fae0) Create stream I0511 16:51:01.139478 6 log.go:172] (0xc004fcc580) (0xc00143fae0) Stream added, broadcasting: 3 I0511 16:51:01.140665 6 log.go:172] (0xc004fcc580) Reply frame received for 3 I0511 16:51:01.140700 6 log.go:172] (0xc004fcc580) (0xc001b12320) Create stream I0511 16:51:01.140710 6 log.go:172] (0xc004fcc580) (0xc001b12320) Stream added, broadcasting: 5 I0511 16:51:01.141820 6 log.go:172] (0xc004fcc580) Reply frame received for 5 I0511 16:51:01.189337 6 log.go:172] (0xc004fcc580) Data frame received for 5 I0511 16:51:01.189357 6 log.go:172] (0xc001b12320) (5) Data frame handling I0511 16:51:01.189408 6 log.go:172] (0xc004fcc580) Data frame received for 3 I0511 16:51:01.189442 6 log.go:172] (0xc00143fae0) (3) Data frame handling I0511 16:51:01.189458 6 log.go:172] (0xc00143fae0) (3) Data frame sent I0511 16:51:01.189470 6 log.go:172] (0xc004fcc580) Data frame received for 3 I0511 16:51:01.189478 6 log.go:172] (0xc00143fae0) (3) Data frame handling I0511 16:51:01.190465 6 log.go:172] (0xc004fcc580) Data frame received for 1 I0511 16:51:01.190481 6 log.go:172] (0xc000d2ce60) (1) Data frame handling I0511 16:51:01.190501 6 log.go:172] (0xc000d2ce60) (1) Data frame sent I0511 16:51:01.190535 6 log.go:172] (0xc004fcc580) (0xc000d2ce60) Stream removed, broadcasting: 1 I0511 16:51:01.190636 6 log.go:172] (0xc004fcc580) Go away received I0511 16:51:01.190667 6 log.go:172] (0xc004fcc580) (0xc000d2ce60) Stream removed, broadcasting: 1 I0511 16:51:01.190686 6 log.go:172] (0xc004fcc580) (0xc00143fae0) Stream removed, broadcasting: 3 I0511 16:51:01.190699 6 log.go:172] (0xc004fcc580) (0xc001b12320) Stream removed, broadcasting: 5 May 11 16:51:01.190: INFO: Exec stderr: "" May 11 16:51:01.190: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.190: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.215332 6 log.go:172] (0xc00561a9a0) (0xc00143fc20) Create stream I0511 16:51:01.215361 6 log.go:172] (0xc00561a9a0) (0xc00143fc20) Stream added, broadcasting: 1 I0511 16:51:01.217795 6 log.go:172] (0xc00561a9a0) Reply frame received for 1 I0511 16:51:01.217882 6 log.go:172] (0xc00561a9a0) (0xc001b132c0) Create stream I0511 16:51:01.217907 6 log.go:172] (0xc00561a9a0) (0xc001b132c0) Stream added, broadcasting: 3 I0511 16:51:01.218931 6 log.go:172] (0xc00561a9a0) Reply frame received for 3 I0511 16:51:01.218957 6 log.go:172] (0xc00561a9a0) (0xc0027efa40) Create stream I0511 16:51:01.218971 6 log.go:172] (0xc00561a9a0) (0xc0027efa40) Stream added, broadcasting: 5 I0511 16:51:01.219808 6 log.go:172] (0xc00561a9a0) Reply frame received for 5 I0511 16:51:01.259425 6 log.go:172] (0xc00561a9a0) Data frame received 
for 5 I0511 16:51:01.259500 6 log.go:172] (0xc0027efa40) (5) Data frame handling I0511 16:51:01.259567 6 log.go:172] (0xc00561a9a0) Data frame received for 3 I0511 16:51:01.259606 6 log.go:172] (0xc001b132c0) (3) Data frame handling I0511 16:51:01.259647 6 log.go:172] (0xc001b132c0) (3) Data frame sent I0511 16:51:01.259685 6 log.go:172] (0xc00561a9a0) Data frame received for 3 I0511 16:51:01.259722 6 log.go:172] (0xc001b132c0) (3) Data frame handling I0511 16:51:01.260501 6 log.go:172] (0xc00561a9a0) Data frame received for 1 I0511 16:51:01.260525 6 log.go:172] (0xc00143fc20) (1) Data frame handling I0511 16:51:01.260545 6 log.go:172] (0xc00143fc20) (1) Data frame sent I0511 16:51:01.260572 6 log.go:172] (0xc00561a9a0) (0xc00143fc20) Stream removed, broadcasting: 1 I0511 16:51:01.260597 6 log.go:172] (0xc00561a9a0) Go away received I0511 16:51:01.260717 6 log.go:172] (0xc00561a9a0) (0xc00143fc20) Stream removed, broadcasting: 1 I0511 16:51:01.260746 6 log.go:172] (0xc00561a9a0) (0xc001b132c0) Stream removed, broadcasting: 3 I0511 16:51:01.260758 6 log.go:172] (0xc00561a9a0) (0xc0027efa40) Stream removed, broadcasting: 5 May 11 16:51:01.260: INFO: Exec stderr: "" May 11 16:51:01.260: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.260: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.286607 6 log.go:172] (0xc004fccbb0) (0xc000d2d5e0) Create stream I0511 16:51:01.286633 6 log.go:172] (0xc004fccbb0) (0xc000d2d5e0) Stream added, broadcasting: 1 I0511 16:51:01.288759 6 log.go:172] (0xc004fccbb0) Reply frame received for 1 I0511 16:51:01.288800 6 log.go:172] (0xc004fccbb0) (0xc00143fcc0) Create stream I0511 16:51:01.288817 6 log.go:172] (0xc004fccbb0) (0xc00143fcc0) Stream added, broadcasting: 3 I0511 16:51:01.290209 6 log.go:172] (0xc004fccbb0) Reply frame received for 3 I0511 16:51:01.290265 6 log.go:172] (0xc004fccbb0) (0xc0027efb80) Create stream I0511 16:51:01.290286 6 log.go:172] (0xc004fccbb0) (0xc0027efb80) Stream added, broadcasting: 5 I0511 16:51:01.291168 6 log.go:172] (0xc004fccbb0) Reply frame received for 5 I0511 16:51:01.353977 6 log.go:172] (0xc004fccbb0) Data frame received for 3 I0511 16:51:01.354031 6 log.go:172] (0xc00143fcc0) (3) Data frame handling I0511 16:51:01.354044 6 log.go:172] (0xc00143fcc0) (3) Data frame sent I0511 16:51:01.354059 6 log.go:172] (0xc004fccbb0) Data frame received for 3 I0511 16:51:01.354085 6 log.go:172] (0xc00143fcc0) (3) Data frame handling I0511 16:51:01.354106 6 log.go:172] (0xc004fccbb0) Data frame received for 5 I0511 16:51:01.354133 6 log.go:172] (0xc0027efb80) (5) Data frame handling I0511 16:51:01.355606 6 log.go:172] (0xc004fccbb0) Data frame received for 1 I0511 16:51:01.355659 6 log.go:172] (0xc000d2d5e0) (1) Data frame handling I0511 16:51:01.355681 6 log.go:172] (0xc000d2d5e0) (1) Data frame sent I0511 16:51:01.355696 6 log.go:172] (0xc004fccbb0) (0xc000d2d5e0) Stream removed, broadcasting: 1 I0511 16:51:01.355711 6 log.go:172] (0xc004fccbb0) Go away received I0511 16:51:01.355885 6 log.go:172] (0xc004fccbb0) (0xc000d2d5e0) Stream removed, broadcasting: 1 I0511 16:51:01.355972 6 log.go:172] (0xc004fccbb0) (0xc00143fcc0) Stream removed, broadcasting: 3 I0511 16:51:01.356046 6 log.go:172] (0xc004fccbb0) (0xc0027efb80) Stream removed, broadcasting: 5 May 11 16:51:01.356: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since 
container specifies /etc/hosts mount May 11 16:51:01.356: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.356: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.387118 6 log.go:172] (0xc00561afd0) (0xc001038320) Create stream I0511 16:51:01.387145 6 log.go:172] (0xc00561afd0) (0xc001038320) Stream added, broadcasting: 1 I0511 16:51:01.389035 6 log.go:172] (0xc00561afd0) Reply frame received for 1 I0511 16:51:01.389066 6 log.go:172] (0xc00561afd0) (0xc000d2d680) Create stream I0511 16:51:01.389077 6 log.go:172] (0xc00561afd0) (0xc000d2d680) Stream added, broadcasting: 3 I0511 16:51:01.390045 6 log.go:172] (0xc00561afd0) Reply frame received for 3 I0511 16:51:01.390082 6 log.go:172] (0xc00561afd0) (0xc0027efcc0) Create stream I0511 16:51:01.390096 6 log.go:172] (0xc00561afd0) (0xc0027efcc0) Stream added, broadcasting: 5 I0511 16:51:01.390925 6 log.go:172] (0xc00561afd0) Reply frame received for 5 I0511 16:51:01.447999 6 log.go:172] (0xc00561afd0) Data frame received for 3 I0511 16:51:01.448038 6 log.go:172] (0xc000d2d680) (3) Data frame handling I0511 16:51:01.448079 6 log.go:172] (0xc00561afd0) Data frame received for 5 I0511 16:51:01.448121 6 log.go:172] (0xc0027efcc0) (5) Data frame handling I0511 16:51:01.448157 6 log.go:172] (0xc000d2d680) (3) Data frame sent I0511 16:51:01.448176 6 log.go:172] (0xc00561afd0) Data frame received for 3 I0511 16:51:01.448191 6 log.go:172] (0xc000d2d680) (3) Data frame handling I0511 16:51:01.449805 6 log.go:172] (0xc00561afd0) Data frame received for 1 I0511 16:51:01.449828 6 log.go:172] (0xc001038320) (1) Data frame handling I0511 16:51:01.449843 6 log.go:172] (0xc001038320) (1) Data frame sent I0511 16:51:01.449875 6 log.go:172] (0xc00561afd0) (0xc001038320) Stream removed, broadcasting: 1 I0511 16:51:01.449902 6 log.go:172] (0xc00561afd0) Go away received I0511 16:51:01.450084 6 log.go:172] (0xc00561afd0) (0xc001038320) Stream removed, broadcasting: 1 I0511 16:51:01.450110 6 log.go:172] (0xc00561afd0) (0xc000d2d680) Stream removed, broadcasting: 3 I0511 16:51:01.450122 6 log.go:172] (0xc00561afd0) (0xc0027efcc0) Stream removed, broadcasting: 5 May 11 16:51:01.450: INFO: Exec stderr: "" May 11 16:51:01.450: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.450: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.556013 6 log.go:172] (0xc005a0e2c0) (0xc000d12500) Create stream I0511 16:51:01.556050 6 log.go:172] (0xc005a0e2c0) (0xc000d12500) Stream added, broadcasting: 1 I0511 16:51:01.575978 6 log.go:172] (0xc005a0e2c0) Reply frame received for 1 I0511 16:51:01.576022 6 log.go:172] (0xc005a0e2c0) (0xc000d125a0) Create stream I0511 16:51:01.576033 6 log.go:172] (0xc005a0e2c0) (0xc000d125a0) Stream added, broadcasting: 3 I0511 16:51:01.576780 6 log.go:172] (0xc005a0e2c0) Reply frame received for 3 I0511 16:51:01.576805 6 log.go:172] (0xc005a0e2c0) (0xc000d12640) Create stream I0511 16:51:01.576815 6 log.go:172] (0xc005a0e2c0) (0xc000d12640) Stream added, broadcasting: 5 I0511 16:51:01.577743 6 log.go:172] (0xc005a0e2c0) Reply frame received for 5 I0511 16:51:01.638377 6 log.go:172] (0xc005a0e2c0) Data frame received for 3 I0511 16:51:01.638413 6 log.go:172] (0xc000d125a0) (3) Data frame handling I0511 
16:51:01.638427 6 log.go:172] (0xc000d125a0) (3) Data frame sent I0511 16:51:01.638437 6 log.go:172] (0xc005a0e2c0) Data frame received for 3 I0511 16:51:01.638449 6 log.go:172] (0xc000d125a0) (3) Data frame handling I0511 16:51:01.638491 6 log.go:172] (0xc005a0e2c0) Data frame received for 5 I0511 16:51:01.638513 6 log.go:172] (0xc000d12640) (5) Data frame handling I0511 16:51:01.640158 6 log.go:172] (0xc005a0e2c0) Data frame received for 1 I0511 16:51:01.640190 6 log.go:172] (0xc000d12500) (1) Data frame handling I0511 16:51:01.640210 6 log.go:172] (0xc000d12500) (1) Data frame sent I0511 16:51:01.640272 6 log.go:172] (0xc005a0e2c0) (0xc000d12500) Stream removed, broadcasting: 1 I0511 16:51:01.640390 6 log.go:172] (0xc005a0e2c0) (0xc000d12500) Stream removed, broadcasting: 1 I0511 16:51:01.640410 6 log.go:172] (0xc005a0e2c0) (0xc000d125a0) Stream removed, broadcasting: 3 I0511 16:51:01.640639 6 log.go:172] (0xc005a0e2c0) (0xc000d12640) Stream removed, broadcasting: 5 May 11 16:51:01.640: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 11 16:51:01.640: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.640: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.643731 6 log.go:172] (0xc005a0e2c0) Go away received I0511 16:51:01.785804 6 log.go:172] (0xc001936000) (0xc001762000) Create stream I0511 16:51:01.785881 6 log.go:172] (0xc001936000) (0xc001762000) Stream added, broadcasting: 1 I0511 16:51:01.788836 6 log.go:172] (0xc001936000) Reply frame received for 1 I0511 16:51:01.788890 6 log.go:172] (0xc001936000) (0xc00227a000) Create stream I0511 16:51:01.788910 6 log.go:172] (0xc001936000) (0xc00227a000) Stream added, broadcasting: 3 I0511 16:51:01.790368 6 log.go:172] (0xc001936000) Reply frame received for 3 I0511 16:51:01.790419 6 log.go:172] (0xc001936000) (0xc0011980a0) Create stream I0511 16:51:01.790438 6 log.go:172] (0xc001936000) (0xc0011980a0) Stream added, broadcasting: 5 I0511 16:51:01.791500 6 log.go:172] (0xc001936000) Reply frame received for 5 I0511 16:51:01.842122 6 log.go:172] (0xc001936000) Data frame received for 5 I0511 16:51:01.842165 6 log.go:172] (0xc0011980a0) (5) Data frame handling I0511 16:51:01.842192 6 log.go:172] (0xc001936000) Data frame received for 3 I0511 16:51:01.842210 6 log.go:172] (0xc00227a000) (3) Data frame handling I0511 16:51:01.842234 6 log.go:172] (0xc00227a000) (3) Data frame sent I0511 16:51:01.842247 6 log.go:172] (0xc001936000) Data frame received for 3 I0511 16:51:01.842259 6 log.go:172] (0xc00227a000) (3) Data frame handling I0511 16:51:01.843742 6 log.go:172] (0xc001936000) Data frame received for 1 I0511 16:51:01.843767 6 log.go:172] (0xc001762000) (1) Data frame handling I0511 16:51:01.843780 6 log.go:172] (0xc001762000) (1) Data frame sent I0511 16:51:01.843805 6 log.go:172] (0xc001936000) (0xc001762000) Stream removed, broadcasting: 1 I0511 16:51:01.843854 6 log.go:172] (0xc001936000) Go away received I0511 16:51:01.844043 6 log.go:172] (0xc001936000) (0xc001762000) Stream removed, broadcasting: 1 I0511 16:51:01.844067 6 log.go:172] (0xc001936000) (0xc00227a000) Stream removed, broadcasting: 3 I0511 16:51:01.844087 6 log.go:172] (0xc001936000) (0xc0011980a0) Stream removed, broadcasting: 5 May 11 16:51:01.844: INFO: Exec stderr: "" May 11 16:51:01.844: INFO: ExecWithOptions 
{Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.844: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.877398 6 log.go:172] (0xc001936630) (0xc001762960) Create stream I0511 16:51:01.877436 6 log.go:172] (0xc001936630) (0xc001762960) Stream added, broadcasting: 1 I0511 16:51:01.879287 6 log.go:172] (0xc001936630) Reply frame received for 1 I0511 16:51:01.879312 6 log.go:172] (0xc001936630) (0xc001198780) Create stream I0511 16:51:01.879319 6 log.go:172] (0xc001936630) (0xc001198780) Stream added, broadcasting: 3 I0511 16:51:01.880017 6 log.go:172] (0xc001936630) Reply frame received for 3 I0511 16:51:01.880061 6 log.go:172] (0xc001936630) (0xc0012be000) Create stream I0511 16:51:01.880077 6 log.go:172] (0xc001936630) (0xc0012be000) Stream added, broadcasting: 5 I0511 16:51:01.880649 6 log.go:172] (0xc001936630) Reply frame received for 5 I0511 16:51:01.945574 6 log.go:172] (0xc001936630) Data frame received for 5 I0511 16:51:01.945622 6 log.go:172] (0xc0012be000) (5) Data frame handling I0511 16:51:01.945665 6 log.go:172] (0xc001936630) Data frame received for 3 I0511 16:51:01.945676 6 log.go:172] (0xc001198780) (3) Data frame handling I0511 16:51:01.945685 6 log.go:172] (0xc001198780) (3) Data frame sent I0511 16:51:01.945694 6 log.go:172] (0xc001936630) Data frame received for 3 I0511 16:51:01.945699 6 log.go:172] (0xc001198780) (3) Data frame handling I0511 16:51:01.947046 6 log.go:172] (0xc001936630) Data frame received for 1 I0511 16:51:01.947085 6 log.go:172] (0xc001762960) (1) Data frame handling I0511 16:51:01.947165 6 log.go:172] (0xc001762960) (1) Data frame sent I0511 16:51:01.947190 6 log.go:172] (0xc001936630) (0xc001762960) Stream removed, broadcasting: 1 I0511 16:51:01.947266 6 log.go:172] (0xc001936630) (0xc001762960) Stream removed, broadcasting: 1 I0511 16:51:01.947281 6 log.go:172] (0xc001936630) (0xc001198780) Stream removed, broadcasting: 3 I0511 16:51:01.947525 6 log.go:172] (0xc001936630) (0xc0012be000) Stream removed, broadcasting: 5 I0511 16:51:01.947687 6 log.go:172] (0xc001936630) Go away received May 11 16:51:01.947: INFO: Exec stderr: "" May 11 16:51:01.947: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:01.947: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:01.976645 6 log.go:172] (0xc001936c60) (0xc001763400) Create stream I0511 16:51:01.976679 6 log.go:172] (0xc001936c60) (0xc001763400) Stream added, broadcasting: 1 I0511 16:51:01.979104 6 log.go:172] (0xc001936c60) Reply frame received for 1 I0511 16:51:01.979145 6 log.go:172] (0xc001936c60) (0xc0011995e0) Create stream I0511 16:51:01.979158 6 log.go:172] (0xc001936c60) (0xc0011995e0) Stream added, broadcasting: 3 I0511 16:51:01.979976 6 log.go:172] (0xc001936c60) Reply frame received for 3 I0511 16:51:01.980033 6 log.go:172] (0xc001936c60) (0xc001199860) Create stream I0511 16:51:01.980053 6 log.go:172] (0xc001936c60) (0xc001199860) Stream added, broadcasting: 5 I0511 16:51:01.980951 6 log.go:172] (0xc001936c60) Reply frame received for 5 I0511 16:51:02.046881 6 log.go:172] (0xc001936c60) Data frame received for 5 I0511 16:51:02.046943 6 log.go:172] (0xc001199860) (5) Data frame handling I0511 16:51:02.047005 6 log.go:172] (0xc001936c60) Data frame received 
for 3 I0511 16:51:02.047042 6 log.go:172] (0xc0011995e0) (3) Data frame handling I0511 16:51:02.047076 6 log.go:172] (0xc0011995e0) (3) Data frame sent I0511 16:51:02.047086 6 log.go:172] (0xc001936c60) Data frame received for 3 I0511 16:51:02.047097 6 log.go:172] (0xc0011995e0) (3) Data frame handling I0511 16:51:02.048921 6 log.go:172] (0xc001936c60) Data frame received for 1 I0511 16:51:02.048957 6 log.go:172] (0xc001763400) (1) Data frame handling I0511 16:51:02.048975 6 log.go:172] (0xc001763400) (1) Data frame sent I0511 16:51:02.049009 6 log.go:172] (0xc001936c60) (0xc001763400) Stream removed, broadcasting: 1 I0511 16:51:02.049060 6 log.go:172] (0xc001936c60) Go away received I0511 16:51:02.049422 6 log.go:172] (0xc001936c60) (0xc001763400) Stream removed, broadcasting: 1 I0511 16:51:02.049450 6 log.go:172] (0xc001936c60) (0xc0011995e0) Stream removed, broadcasting: 3 I0511 16:51:02.049464 6 log.go:172] (0xc001936c60) (0xc001199860) Stream removed, broadcasting: 5 May 11 16:51:02.049: INFO: Exec stderr: "" May 11 16:51:02.049: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4641 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 16:51:02.049: INFO: >>> kubeConfig: /root/.kube/config I0511 16:51:02.079422 6 log.go:172] (0xc0028aac60) (0xc001199f40) Create stream I0511 16:51:02.079450 6 log.go:172] (0xc0028aac60) (0xc001199f40) Stream added, broadcasting: 1 I0511 16:51:02.081743 6 log.go:172] (0xc0028aac60) Reply frame received for 1 I0511 16:51:02.081775 6 log.go:172] (0xc0028aac60) (0xc001136140) Create stream I0511 16:51:02.081783 6 log.go:172] (0xc0028aac60) (0xc001136140) Stream added, broadcasting: 3 I0511 16:51:02.082546 6 log.go:172] (0xc0028aac60) Reply frame received for 3 I0511 16:51:02.082578 6 log.go:172] (0xc0028aac60) (0xc0012be0a0) Create stream I0511 16:51:02.082589 6 log.go:172] (0xc0028aac60) (0xc0012be0a0) Stream added, broadcasting: 5 I0511 16:51:02.083414 6 log.go:172] (0xc0028aac60) Reply frame received for 5 I0511 16:51:02.144098 6 log.go:172] (0xc0028aac60) Data frame received for 5 I0511 16:51:02.144162 6 log.go:172] (0xc0028aac60) Data frame received for 3 I0511 16:51:02.144218 6 log.go:172] (0xc001136140) (3) Data frame handling I0511 16:51:02.144232 6 log.go:172] (0xc001136140) (3) Data frame sent I0511 16:51:02.144243 6 log.go:172] (0xc0028aac60) Data frame received for 3 I0511 16:51:02.144256 6 log.go:172] (0xc001136140) (3) Data frame handling I0511 16:51:02.144318 6 log.go:172] (0xc0012be0a0) (5) Data frame handling I0511 16:51:02.146814 6 log.go:172] (0xc0028aac60) Data frame received for 1 I0511 16:51:02.146855 6 log.go:172] (0xc001199f40) (1) Data frame handling I0511 16:51:02.146882 6 log.go:172] (0xc001199f40) (1) Data frame sent I0511 16:51:02.146903 6 log.go:172] (0xc0028aac60) (0xc001199f40) Stream removed, broadcasting: 1 I0511 16:51:02.146926 6 log.go:172] (0xc0028aac60) Go away received I0511 16:51:02.147242 6 log.go:172] (0xc0028aac60) (0xc001199f40) Stream removed, broadcasting: 1 I0511 16:51:02.147283 6 log.go:172] (0xc0028aac60) (0xc001136140) Stream removed, broadcasting: 3 I0511 16:51:02.147315 6 log.go:172] (0xc0028aac60) (0xc0012be0a0) Stream removed, broadcasting: 5 May 11 16:51:02.147: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:51:02.147: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4641" for this suite. • [SLOW TEST:21.322 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:51:02.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 11 16:51:02.779: INFO: Waiting up to 5m0s for pod "client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788" in namespace "containers-3938" to be "success or failure" May 11 16:51:02.807: INFO: Pod "client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788": Phase="Pending", Reason="", readiness=false. Elapsed: 27.863023ms May 11 16:51:04.810: INFO: Pod "client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031787595s May 11 16:51:06.814: INFO: Pod "client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035415903s May 11 16:51:08.875: INFO: Pod "client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095894431s STEP: Saw pod success May 11 16:51:08.875: INFO: Pod "client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788" satisfied condition "success or failure" May 11 16:51:08.944: INFO: Trying to get logs from node jerma-worker2 pod client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788 container test-container: STEP: delete the pod May 11 16:51:09.531: INFO: Waiting for pod client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788 to disappear May 11 16:51:09.725: INFO: Pod client-containers-712c85a4-9192-48ed-a7e0-a3a75d201788 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:51:09.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3938" for this suite. 
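------------------------------
Two notes on the tests above. The /etc/hosts test hinges on the rule that the kubelet manages a container's /etc/hosts unless the pod runs with hostNetwork=true or the container mounts its own file at that path. The Docker Containers test corresponds to setting "args" in the container spec: Kubernetes substitutes "args" for the image's default CMD while any image ENTRYPOINT is kept. A minimal sketch with hypothetical names and arguments:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "default arguments overridden"]   # replaces the image CMD
EOF

------------------------------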
• [SLOW TEST:7.621 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4472,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:51:09.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 16:51:10.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd" in namespace "downward-api-7828" to be "success or failure" May 11 16:51:10.486: INFO: Pod "downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 93.801681ms May 11 16:51:12.537: INFO: Pod "downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14558185s May 11 16:51:14.542: INFO: Pod "downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150310475s STEP: Saw pod success May 11 16:51:14.542: INFO: Pod "downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd" satisfied condition "success or failure" May 11 16:51:14.546: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd container client-container: STEP: delete the pod May 11 16:51:14.907: INFO: Waiting for pod downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd to disappear May 11 16:51:15.175: INFO: Pod downwardapi-volume-dd6a7ccb-c5d9-4c44-ab4d-0f0a4822fbbd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 16:51:15.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7828" for this suite. 
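------------------------------
The downwardAPI volume plugin exercised above projects container resource fields into files through resourceFieldRef. A minimal sketch of the same shape, with hypothetical names and values (with a 250m request and a 1m divisor, the file contains "250"):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m   # value is written as an integer multiple of 1m
EOF

------------------------------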
• [SLOW TEST:5.464 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 16:51:15.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 11 16:51:15.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3296' May 11 16:51:16.524: INFO: stderr: "" May 11 16:51:16.524: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 16:51:16.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296' May 11 16:51:16.981: INFO: stderr: "" May 11 16:51:16.981: INFO: stdout: "update-demo-nautilus-89fjm update-demo-nautilus-zffkh " May 11 16:51:16.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89fjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:17.351: INFO: stderr: "" May 11 16:51:17.351: INFO: stdout: "" May 11 16:51:17.351: INFO: update-demo-nautilus-89fjm is created but not running May 11 16:51:22.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296' May 11 16:51:22.459: INFO: stderr: "" May 11 16:51:22.459: INFO: stdout: "update-demo-nautilus-89fjm update-demo-nautilus-zffkh " May 11 16:51:22.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89fjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:22.541: INFO: stderr: "" May 11 16:51:22.541: INFO: stdout: "true" May 11 16:51:22.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89fjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:22.638: INFO: stderr: "" May 11 16:51:22.638: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 16:51:22.638: INFO: validating pod update-demo-nautilus-89fjm May 11 16:51:22.641: INFO: got data: { "image": "nautilus.jpg" } May 11 16:51:22.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 16:51:22.641: INFO: update-demo-nautilus-89fjm is verified up and running May 11 16:51:22.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zffkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:22.751: INFO: stderr: "" May 11 16:51:22.751: INFO: stdout: "true" May 11 16:51:22.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zffkh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:22.843: INFO: stderr: "" May 11 16:51:22.843: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 16:51:22.843: INFO: validating pod update-demo-nautilus-zffkh May 11 16:51:22.846: INFO: got data: { "image": "nautilus.jpg" } May 11 16:51:22.846: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 16:51:22.846: INFO: update-demo-nautilus-zffkh is verified up and running STEP: scaling down the replication controller May 11 16:51:22.849: INFO: scanned /root for discovery docs: May 11 16:51:22.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3296' May 11 16:51:24.582: INFO: stderr: "" May 11 16:51:24.582: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 16:51:24.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296' May 11 16:51:24.793: INFO: stderr: "" May 11 16:51:24.793: INFO: stdout: "update-demo-nautilus-89fjm update-demo-nautilus-zffkh " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 16:51:29.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296' May 11 16:51:29.888: INFO: stderr: "" May 11 16:51:29.888: INFO: stdout: "update-demo-nautilus-zffkh " May 11 16:51:29.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zffkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:29.977: INFO: stderr: "" May 11 16:51:29.977: INFO: stdout: "true" May 11 16:51:29.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zffkh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:30.068: INFO: stderr: "" May 11 16:51:30.068: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 16:51:30.068: INFO: validating pod update-demo-nautilus-zffkh May 11 16:51:30.071: INFO: got data: { "image": "nautilus.jpg" } May 11 16:51:30.071: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 16:51:30.071: INFO: update-demo-nautilus-zffkh is verified up and running STEP: scaling up the replication controller May 11 16:51:30.073: INFO: scanned /root for discovery docs: May 11 16:51:30.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3296' May 11 16:51:31.189: INFO: stderr: "" May 11 16:51:31.189: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 16:51:31.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296' May 11 16:51:31.281: INFO: stderr: "" May 11 16:51:31.281: INFO: stdout: "update-demo-nautilus-5rw8n update-demo-nautilus-zffkh " May 11 16:51:31.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5rw8n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:31.376: INFO: stderr: "" May 11 16:51:31.376: INFO: stdout: "" May 11 16:51:31.376: INFO: update-demo-nautilus-5rw8n is created but not running May 11 16:51:36.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296' May 11 16:51:36.478: INFO: stderr: "" May 11 16:51:36.478: INFO: stdout: "update-demo-nautilus-5rw8n update-demo-nautilus-zffkh " May 11 16:51:36.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5rw8n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:36.577: INFO: stderr: "" May 11 16:51:36.577: INFO: stdout: "true" May 11 16:51:36.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5rw8n -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:36.667: INFO: stderr: "" May 11 16:51:36.667: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 16:51:36.667: INFO: validating pod update-demo-nautilus-5rw8n May 11 16:51:36.671: INFO: got data: { "image": "nautilus.jpg" } May 11 16:51:36.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 16:51:36.671: INFO: update-demo-nautilus-5rw8n is verified up and running May 11 16:51:36.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zffkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:36.765: INFO: stderr: "" May 11 16:51:36.765: INFO: stdout: "true" May 11 16:51:36.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zffkh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3296' May 11 16:51:36.843: INFO: stderr: "" May 11 16:51:36.843: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 16:51:36.843: INFO: validating pod update-demo-nautilus-zffkh May 11 16:51:36.846: INFO: got data: { "image": "nautilus.jpg" } May 11 16:51:36.846: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 16:51:36.846: INFO: update-demo-nautilus-zffkh is verified up and running STEP: using delete to clean up resources May 11 16:51:36.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3296' May 11 16:51:36.984: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
May 11 16:51:36.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3296'
May 11 16:51:37.091: INFO: stderr: "No resources found in kubectl-3296 namespace.\n"
May 11 16:51:37.091: INFO: stdout: ""
May 11 16:51:37.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3296 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 11 16:51:37.170: INFO: stderr: ""
May 11 16:51:37.170: INFO: stdout: "update-demo-nautilus-5rw8n\nupdate-demo-nautilus-zffkh\n"
May 11 16:51:37.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3296'
May 11 16:51:37.767: INFO: stderr: "No resources found in kubectl-3296 namespace.\n"
May 11 16:51:37.767: INFO: stdout: ""
May 11 16:51:37.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3296 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 11 16:51:37.855: INFO: stderr: ""
May 11 16:51:37.855: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 16:51:37.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3296" for this suite.
• [SLOW TEST:22.660 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":277,"skipped":4503,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 16:51:37.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
May 11 16:51:39.390: INFO: PodSpec: initContainers in spec.initContainers
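The log does not echo the actual PodSpec, but the shape under test is a restartPolicy=Never pod whose init container exits non-zero: init containers must all succeed before any app container starts, and Never forbids retries, so the kubelet fails the pod without ever starting the app container. Below is a hypothetical pod of that shape in Go; the namespace matches the log, while the pod name, container names, images, and commands are illustrative, not the suite's real spec.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod builds a RestartNever pod whose init container exits
// non-zero. Since init containers gate the app containers and Never
// forbids retries, the pod goes straight to phase Failed and "run1"
// is never started.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pod-init-fail",       // illustrative name
			Namespace: "init-container-2974", // namespace from the log
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox:1.29",
				Command: []string{"/bin/false"}, // fails immediately
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox:1.29",
				Command: []string{"/bin/true"}, // never reached
			}},
		},
	}
}

func main() {
	pod := failingInitPod()
	fmt.Printf("%s: restartPolicy=%s, %d init container(s)\n",
		pod.Name, pod.Spec.RestartPolicy, len(pod.Spec.InitContainers))
}
```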
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 16:51:50.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2974" for this suite.
• [SLOW TEST:13.086 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":278,"skipped":4523,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 11 16:51:50.991: INFO: Running AfterSuite actions on all nodes
May 11 16:51:50.991: INFO: Running AfterSuite actions on node 1
May 11 16:51:50.991: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 5964.334 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS