I0329 21:06:29.383763 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0329 21:06:29.384002 6 e2e.go:109] Starting e2e run "45d88c4e-a8a4-4b33-89ed-9a8e86a8499e" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585515988 - Will randomize all specs
Will run 278 of 4843 specs

Mar 29 21:06:29.436: INFO: >>> kubeConfig: /root/.kube/config
Mar 29 21:06:29.442: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 29 21:06:29.462: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 29 21:06:29.509: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 29 21:06:29.509: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 29 21:06:29.509: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 29 21:06:29.519: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 29 21:06:29.519: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 29 21:06:29.519: INFO: e2e test version: v1.17.3
Mar 29 21:06:29.520: INFO: kube-apiserver version: v1.17.2
Mar 29 21:06:29.520: INFO: >>> kubeConfig: /root/.kube/config
Mar 29 21:06:29.524: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:06:29.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Mar 29 21:06:29.585: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:06:33.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4110" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:06:33.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:06:33.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 29 21:06:34.341: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-29T21:06:34Z generation:1 name:name1 resourceVersion:3780693 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5cd23915-ca37-48b2-848f-b93a10ffbbd2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 29 21:06:44.347: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-29T21:06:44Z generation:1 name:name2 resourceVersion:3780734 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d2014f0a-c0bc-4c09-9db4-9ce0b9d87ff4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 29 21:06:54.362: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-29T21:06:34Z generation:2 name:name1 resourceVersion:3780764 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5cd23915-ca37-48b2-848f-b93a10ffbbd2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 29 21:07:04.368: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-29T21:06:44Z generation:2 name:name2 resourceVersion:3780792 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d2014f0a-c0bc-4c09-9db4-9ce0b9d87ff4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 29 21:07:14.376: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-29T21:06:34Z generation:2 name:name1 resourceVersion:3780822 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5cd23915-ca37-48b2-848f-b93a10ffbbd2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 29 21:07:24.382: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-29T21:06:44Z generation:2 name:name2 resourceVersion:3780852 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d2014f0a-c0bc-4c09-9db4-9ce0b9d87ff4] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:07:34.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5220" for this suite. • [SLOW TEST:61.266 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":2,"skipped":63,"failed":0} [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:07:34.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 29 21:07:34.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4215' Mar 29 21:07:37.325: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 29 21:07:37.325: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 29 21:07:39.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4215' Mar 29 21:07:39.506: INFO: stderr: "" Mar 29 21:07:39.506: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:07:39.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4215" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":3,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:07:39.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:07:40.156: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:07:42.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112860, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112860, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112860, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112860, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:07:45.193: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Mar 29 21:07:49.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8684 to-be-attached-pod -i -c=container1'
Mar 29 21:07:49.385: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:07:49.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8684" for this suite.
STEP: Destroying namespace "webhook-8684-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.931 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":4,"skipped":84,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:07:49.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-31ff5721-ee3a-4708-a6ba-c97bf6c7465e
STEP: Creating a pod to test consume configMaps
Mar 29 21:07:49.579: INFO: Waiting up to 5m0s for pod "pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51" in namespace "configmap-6877" to be "success or failure"
Mar 29 21:07:49.627: INFO: Pod "pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51": Phase="Pending", Reason="", readiness=false. Elapsed: 47.369515ms
Mar 29 21:07:51.631: INFO: Pod "pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051193718s
Mar 29 21:07:53.635: INFO: Pod "pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055017952s
STEP: Saw pod success
Mar 29 21:07:53.635: INFO: Pod "pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51" satisfied condition "success or failure"
Mar 29 21:07:53.637: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51 container configmap-volume-test:
STEP: delete the pod
Mar 29 21:07:53.687: INFO: Waiting for pod pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51 to disappear
Mar 29 21:07:53.701: INFO: Pod pod-configmaps-4660d481-303d-40d5-bc4b-5003654c7b51 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:07:53.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6877" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":144,"failed":0}
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:07:53.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-874c918e-784d-4170-8c93-9f4c79cfb037
STEP: Creating a pod to test consume configMaps
Mar 29 21:07:53.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155" in namespace "configmap-4666" to be "success or failure"
Mar 29 21:07:53.791: INFO: Pod "pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070718ms
Mar 29 21:07:55.795: INFO: Pod "pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008307592s
Mar 29 21:07:57.838: INFO: Pod "pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050845362s
STEP: Saw pod success
Mar 29 21:07:57.838: INFO: Pod "pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155" satisfied condition "success or failure"
Mar 29 21:07:57.845: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155 container configmap-volume-test:
STEP: delete the pod
Mar 29 21:07:57.858: INFO: Waiting for pod pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155 to disappear
Mar 29 21:07:57.863: INFO: Pod pod-configmaps-1cd39ebf-19be-499a-bfe9-d3de5edc5155 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:07:57.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4666" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":145,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:07:57.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2504 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-2504 Mar 29 21:07:58.037: INFO: Found 0 stateful pods, waiting for 1 Mar 29 21:08:08.042: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 29 21:08:08.076: INFO: Deleting all statefulset in ns statefulset-2504 Mar 29 21:08:08.090: INFO: Scaling statefulset ss to 0 Mar 29 21:08:38.141: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:08:38.144: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:08:38.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2504" for this suite. 
• [SLOW TEST:40.283 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":7,"skipped":153,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:08:38.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-x64t
STEP: Creating a pod to test atomic-volume-subpath
Mar 29 21:08:38.267: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-x64t" in namespace "subpath-1906" to be "success or failure"
Mar 29 21:08:38.271: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Pending", Reason="", readiness=false. Elapsed: 3.385638ms
Mar 29 21:08:40.384: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116662369s
Mar 29 21:08:42.389: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 4.121023688s
Mar 29 21:08:44.392: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 6.124477312s
Mar 29 21:08:46.396: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 8.128152477s
Mar 29 21:08:48.400: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 10.13244433s
Mar 29 21:08:50.404: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 12.1363594s
Mar 29 21:08:52.408: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 14.140067778s
Mar 29 21:08:54.418: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 16.150802458s
Mar 29 21:08:56.422: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 18.154039652s
Mar 29 21:08:58.425: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 20.157660436s
Mar 29 21:09:00.432: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Running", Reason="", readiness=true. Elapsed: 22.164048437s
Mar 29 21:09:02.436: INFO: Pod "pod-subpath-test-secret-x64t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.168042259s
STEP: Saw pod success
Mar 29 21:09:02.436: INFO: Pod "pod-subpath-test-secret-x64t" satisfied condition "success or failure"
Mar 29 21:09:02.439: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-x64t container test-container-subpath-secret-x64t:
STEP: delete the pod
Mar 29 21:09:02.487: INFO: Waiting for pod pod-subpath-test-secret-x64t to disappear
Mar 29 21:09:02.491: INFO: Pod pod-subpath-test-secret-x64t no longer exists
STEP: Deleting pod pod-subpath-test-secret-x64t
Mar 29 21:09:02.491: INFO: Deleting pod "pod-subpath-test-secret-x64t" in namespace "subpath-1906"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:09:02.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1906" for this suite.
• [SLOW TEST:24.308 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":8,"skipped":165,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:09:02.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 29 21:09:07.107: INFO: Successfully updated pod "annotationupdate61251e7a-9875-4a5e-8e20-234c9f856c9d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:09:09.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2443" for this suite.
• [SLOW TEST:6.629 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":166,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:09:09.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 29 21:09:09.189: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Mar 29 21:09:11.240: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:09:12.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9905" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":10,"skipped":171,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:09:12.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 29 21:09:12.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4460' Mar 29 21:09:12.651: INFO: stderr: "" Mar 29 21:09:12.651: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 29 21:09:12.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4460' Mar 29 21:09:19.494: INFO: stderr: "" Mar 29 21:09:19.495: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:09:19.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4460" for this suite. 
• [SLOW TEST:7.250 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":11,"skipped":174,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:09:19.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 29 21:09:19.933: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 29 21:09:21.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112959, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112959, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112960, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721112959, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 29 21:09:25.009: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:09:25.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5339" for this suite.
STEP: Destroying namespace "webhook-5339-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.788 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":12,"skipped":282,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:09:25.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:09:36.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9067" for this suite.
• [SLOW TEST:11.245 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":13,"skipped":299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:09:36.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-739ee889-34ba-4fe5-ae43-c0a3b0b236a7
STEP: Creating a pod to test consume secrets
Mar 29 21:09:36.674: INFO: Waiting up to 5m0s for pod "pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f" in namespace "secrets-5914" to be "success or failure"
Mar 29 21:09:36.686: INFO: Pod "pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.352913ms
Mar 29 21:09:38.690: INFO: Pod "pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016011159s
Mar 29 21:09:40.694: INFO: Pod "pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020164821s
STEP: Saw pod success
Mar 29 21:09:40.694: INFO: Pod "pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f" satisfied condition "success or failure"
Mar 29 21:09:40.697: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f container secret-volume-test:
STEP: delete the pod
Mar 29 21:09:40.762: INFO: Waiting for pod pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f to disappear
Mar 29 21:09:40.782: INFO: Pod pod-secrets-f8309ce3-107f-417d-a866-cdc97ccab06f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:09:40.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5914" for this suite.
STEP: Destroying namespace "secret-namespace-3186" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":350,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:09:40.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 29 21:09:40.914: INFO: Waiting up to 5m0s for pod "pod-8d269529-5bd1-4eba-8471-62870a2cd1ca" in namespace "emptydir-1058" to be "success or failure" Mar 29 21:09:40.936: INFO: Pod "pod-8d269529-5bd1-4eba-8471-62870a2cd1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 21.691077ms Mar 29 21:09:42.940: INFO: Pod "pod-8d269529-5bd1-4eba-8471-62870a2cd1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025325402s Mar 29 21:09:44.944: INFO: Pod "pod-8d269529-5bd1-4eba-8471-62870a2cd1ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029382534s STEP: Saw pod success Mar 29 21:09:44.944: INFO: Pod "pod-8d269529-5bd1-4eba-8471-62870a2cd1ca" satisfied condition "success or failure" Mar 29 21:09:44.947: INFO: Trying to get logs from node jerma-worker pod pod-8d269529-5bd1-4eba-8471-62870a2cd1ca container test-container: STEP: delete the pod Mar 29 21:09:44.971: INFO: Waiting for pod pod-8d269529-5bd1-4eba-8471-62870a2cd1ca to disappear Mar 29 21:09:44.973: INFO: Pod pod-8d269529-5bd1-4eba-8471-62870a2cd1ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:09:44.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1058" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":358,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:09:44.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-928143fd-cf0f-4573-ba2f-89a4578eff0b STEP: Creating a pod to test consume configMaps Mar 29 21:09:45.065: INFO: Waiting up to 5m0s for pod "pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2" in namespace "configmap-8966" to be "success or failure" Mar 29 21:09:45.082: INFO: Pod "pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.740256ms Mar 29 21:09:47.086: INFO: Pod "pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021024147s Mar 29 21:09:49.091: INFO: Pod "pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025245694s STEP: Saw pod success Mar 29 21:09:49.091: INFO: Pod "pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2" satisfied condition "success or failure" Mar 29 21:09:49.094: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2 container configmap-volume-test: STEP: delete the pod Mar 29 21:09:49.126: INFO: Waiting for pod pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2 to disappear Mar 29 21:09:49.135: INFO: Pod pod-configmaps-1bc54c99-aa30-4c1d-a583-d614bf71aff2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:09:49.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8966" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":361,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:09:49.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 29 21:09:49.203: INFO: Waiting up to 5m0s for pod "pod-996e1971-e362-4ced-9364-31155698c03d" in namespace "emptydir-5235" to be "success or failure" Mar 29 21:09:49.207: INFO: Pod "pod-996e1971-e362-4ced-9364-31155698c03d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341985ms Mar 29 21:09:51.233: INFO: Pod "pod-996e1971-e362-4ced-9364-31155698c03d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030519157s Mar 29 21:09:53.237: INFO: Pod "pod-996e1971-e362-4ced-9364-31155698c03d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034238991s STEP: Saw pod success Mar 29 21:09:53.237: INFO: Pod "pod-996e1971-e362-4ced-9364-31155698c03d" satisfied condition "success or failure" Mar 29 21:09:53.240: INFO: Trying to get logs from node jerma-worker2 pod pod-996e1971-e362-4ced-9364-31155698c03d container test-container: STEP: delete the pod Mar 29 21:09:53.275: INFO: Waiting for pod pod-996e1971-e362-4ced-9364-31155698c03d to disappear Mar 29 21:09:53.301: INFO: Pod pod-996e1971-e362-4ced-9364-31155698c03d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:09:53.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5235" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":361,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:09:53.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0329 21:10:03.417401 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 29 21:10:03.417: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:10:03.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5457" for this suite. 
• [SLOW TEST:10.114 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":18,"skipped":366,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:10:03.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Mar 29 21:10:07.488: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5334 PodName:pod-sharedvolume-8534cd6f-bdc8-4352-a442-436835bdf448 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 21:10:07.488: INFO: >>> kubeConfig: /root/.kube/config
I0329 21:10:07.524852 6 log.go:172] (0xc001ce6160) (0xc00183c1e0) Create stream
I0329 21:10:07.524900 6 log.go:172] (0xc001ce6160) (0xc00183c1e0) Stream added, broadcasting: 1
I0329 21:10:07.527875 6 log.go:172] (0xc001ce6160) Reply frame received for 1
I0329 21:10:07.527923 6 log.go:172] (0xc001ce6160) (0xc001acb0e0) Create stream
I0329 21:10:07.527945 6 log.go:172] (0xc001ce6160) (0xc001acb0e0) Stream added, broadcasting: 3
I0329 21:10:07.528931 6 log.go:172] (0xc001ce6160) Reply frame received for 3
I0329 21:10:07.528974 6 log.go:172] (0xc001ce6160) (0xc0023514a0) Create stream
I0329 21:10:07.528995 6 log.go:172] (0xc001ce6160) (0xc0023514a0) Stream added, broadcasting: 5
I0329 21:10:07.530049 6 log.go:172] (0xc001ce6160) Reply frame received for 5
I0329 21:10:07.586473 6 log.go:172] (0xc001ce6160) Data frame received for 5
I0329 21:10:07.586529 6 log.go:172] (0xc0023514a0) (5) Data frame handling
I0329 21:10:07.586572 6 log.go:172] (0xc001ce6160) Data frame received for 3
I0329 21:10:07.586594 6 log.go:172] (0xc001acb0e0) (3) Data frame handling
I0329 21:10:07.586627 6 log.go:172] (0xc001acb0e0) (3) Data frame sent
I0329 21:10:07.586653 6 log.go:172] (0xc001ce6160) Data frame received for 3
I0329 21:10:07.586674 6 log.go:172] (0xc001acb0e0) (3) Data frame handling
I0329 21:10:07.587912 6 log.go:172] (0xc001ce6160) Data frame received for 1
I0329 21:10:07.587933 6 log.go:172] (0xc00183c1e0) (1) Data frame handling
I0329 21:10:07.587950 6 log.go:172] (0xc00183c1e0) (1) Data frame sent
I0329 21:10:07.587964 6 log.go:172] (0xc001ce6160) (0xc00183c1e0) Stream removed, broadcasting: 1
I0329 21:10:07.587985 6 log.go:172] (0xc001ce6160) Go away received
I0329 21:10:07.588441 6 log.go:172] (0xc001ce6160) (0xc00183c1e0) Stream removed, broadcasting: 1
I0329 21:10:07.588475 6 log.go:172] (0xc001ce6160) (0xc001acb0e0) Stream removed, broadcasting: 3
I0329 21:10:07.588493 6 log.go:172] (0xc001ce6160) (0xc0023514a0) Stream removed, broadcasting: 5
Mar 29 21:10:07.588: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:10:07.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5334" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":19,"skipped":369,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:10:07.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-5f86479f-7c8a-417a-b92a-9c3e306a7b2a
STEP: Creating a pod to test consume configMaps
Mar 29 21:10:07.726: INFO: Waiting up to 5m0s for pod "pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed" in namespace "configmap-4402" to be "success or failure"
Mar 29 21:10:07.737: INFO: Pod "pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed": Phase="Pending", Reason="", readiness=false. Elapsed: 11.105706ms
Mar 29 21:10:09.741: INFO: Pod "pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015208461s
Mar 29 21:10:11.745: INFO: Pod "pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019388383s
STEP: Saw pod success
Mar 29 21:10:11.745: INFO: Pod "pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed" satisfied condition "success or failure"
Mar 29 21:10:11.749: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed container configmap-volume-test:
STEP: delete the pod
Mar 29 21:10:11.779: INFO: Waiting for pod pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed to disappear
Mar 29 21:10:11.795: INFO: Pod pod-configmaps-4619d3d3-d86d-4335-957e-d24b328878ed no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:10:11.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4402" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":376,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:10:11.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-884 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 29 21:10:11.856: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 29 21:10:33.990: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostname&protocol=udp&host=10.244.1.156&port=8081&tries=1'] Namespace:pod-network-test-884 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:10:33.990: INFO: >>> kubeConfig: /root/.kube/config I0329 21:10:34.016141 6 log.go:172] (0xc002a7f600) (0xc001f006e0) Create stream I0329 21:10:34.016167 6 log.go:172] (0xc002a7f600) (0xc001f006e0) Stream added, broadcasting: 1 I0329 21:10:34.019710 6 log.go:172] (0xc002a7f600) Reply frame received for 1 I0329 21:10:34.019761 6 log.go:172] (0xc002a7f600) (0xc00183c280) Create stream I0329 21:10:34.019779 6 log.go:172] (0xc002a7f600) (0xc00183c280) Stream added, broadcasting: 3 I0329 21:10:34.020822 6 log.go:172] (0xc002a7f600) Reply frame received for 3 I0329 21:10:34.020859 6 log.go:172] (0xc002a7f600) (0xc001e94000) Create stream I0329 21:10:34.020871 6 log.go:172] (0xc002a7f600) (0xc001e94000) Stream added, broadcasting: 5 I0329 21:10:34.021794 6 log.go:172] (0xc002a7f600) Reply frame received for 5 I0329 21:10:34.112423 6 log.go:172] (0xc002a7f600) Data frame received for 3 I0329 21:10:34.112471 6 log.go:172] (0xc00183c280) (3) Data frame handling I0329 21:10:34.112493 6 log.go:172] (0xc00183c280) (3) Data frame sent I0329 21:10:34.113073 6 log.go:172] (0xc002a7f600) Data frame received for 3 I0329 21:10:34.113246 6 log.go:172] (0xc002a7f600) Data frame received for 5 I0329 21:10:34.113290 6 log.go:172] (0xc001e94000) (5) Data frame handling I0329 21:10:34.113335 6 log.go:172] (0xc00183c280) (3) Data frame handling I0329 21:10:34.114899 6 log.go:172] (0xc002a7f600) Data frame received for 1 I0329 21:10:34.114924 6 log.go:172] (0xc001f006e0) (1) Data frame handling I0329 21:10:34.114944 6 log.go:172] (0xc001f006e0) (1) Data frame sent I0329 21:10:34.114961 6 log.go:172] (0xc002a7f600) (0xc001f006e0) Stream removed, broadcasting: 1 I0329 21:10:34.115056 6 log.go:172] (0xc002a7f600) Go away received I0329 21:10:34.115135 6 log.go:172] (0xc002a7f600) (0xc001f006e0) Stream removed, broadcasting: 1 I0329 21:10:34.115185 6 log.go:172] (0xc002a7f600) (0xc00183c280) Stream 
removed, broadcasting: 3 I0329 21:10:34.115205 6 log.go:172] (0xc002a7f600) (0xc001e94000) Stream removed, broadcasting: 5 Mar 29 21:10:34.115: INFO: Waiting for responses: map[] Mar 29 21:10:34.119: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostname&protocol=udp&host=10.244.2.220&port=8081&tries=1'] Namespace:pod-network-test-884 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:10:34.119: INFO: >>> kubeConfig: /root/.kube/config I0329 21:10:34.147682 6 log.go:172] (0xc002c04580) (0xc002350500) Create stream I0329 21:10:34.147725 6 log.go:172] (0xc002c04580) (0xc002350500) Stream added, broadcasting: 1 I0329 21:10:34.150666 6 log.go:172] (0xc002c04580) Reply frame received for 1 I0329 21:10:34.150699 6 log.go:172] (0xc002c04580) (0xc001e940a0) Create stream I0329 21:10:34.150708 6 log.go:172] (0xc002c04580) (0xc001e940a0) Stream added, broadcasting: 3 I0329 21:10:34.151770 6 log.go:172] (0xc002c04580) Reply frame received for 3 I0329 21:10:34.151826 6 log.go:172] (0xc002c04580) (0xc001e941e0) Create stream I0329 21:10:34.151846 6 log.go:172] (0xc002c04580) (0xc001e941e0) Stream added, broadcasting: 5 I0329 21:10:34.152623 6 log.go:172] (0xc002c04580) Reply frame received for 5 I0329 21:10:34.211376 6 log.go:172] (0xc002c04580) Data frame received for 3 I0329 21:10:34.211416 6 log.go:172] (0xc001e940a0) (3) Data frame handling I0329 21:10:34.211445 6 log.go:172] (0xc001e940a0) (3) Data frame sent I0329 21:10:34.212173 6 log.go:172] (0xc002c04580) Data frame received for 3 I0329 21:10:34.212200 6 log.go:172] (0xc001e940a0) (3) Data frame handling I0329 21:10:34.212220 6 log.go:172] (0xc002c04580) Data frame received for 5 I0329 21:10:34.212233 6 log.go:172] (0xc001e941e0) (5) Data frame handling I0329 21:10:34.213707 6 log.go:172] (0xc002c04580) Data frame received for 1 I0329 21:10:34.213746 6 log.go:172] (0xc002350500) (1) Data frame handling I0329 21:10:34.213771 6 log.go:172] (0xc002350500) (1) Data frame sent I0329 21:10:34.213798 6 log.go:172] (0xc002c04580) (0xc002350500) Stream removed, broadcasting: 1 I0329 21:10:34.213828 6 log.go:172] (0xc002c04580) Go away received I0329 21:10:34.213952 6 log.go:172] (0xc002c04580) (0xc002350500) Stream removed, broadcasting: 1 I0329 21:10:34.213969 6 log.go:172] (0xc002c04580) (0xc001e940a0) Stream removed, broadcasting: 3 I0329 21:10:34.213975 6 log.go:172] (0xc002c04580) (0xc001e941e0) Stream removed, broadcasting: 5 Mar 29 21:10:34.214: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:10:34.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-884" for this suite. 
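The two ExecWithOptions calls above are agnhost's dial pattern: a host-network test pod asks one webserver pod (10.244.2.221 in this run) to probe each target pod over UDP and report back the hostname it received. Stripped of the framework's stream plumbing, each probe is just the curl already visible in the log:

# Pod IPs are whatever this run was assigned; they differ every run.
curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostname&protocol=udp&host=10.244.1.156&port=8081&tries=1'
# => {"responses":["<target-pod-hostname>"]} when the UDP round trip works

The "Waiting for responses: map[]" line means the set of still-awaited endpoints is empty, so every expected hostname came back and the check passed.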
• [SLOW TEST:22.416 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":376,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:10:34.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:10:34.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:10:36.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113034, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113034, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113034, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113034, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:10:39.810: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:10:39.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8538-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:10:40.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6622" for this suite. STEP: Destroying namespace "webhook-6622-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.647 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":22,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:10:40.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-94c473d3-5eb6-48be-ba46-d50b42f0b7bf STEP: Creating a pod to test consume secrets Mar 29 21:10:40.987: INFO: Waiting up to 5m0s for pod "pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa" in namespace "secrets-7873" to be "success or failure" Mar 29 21:10:40.991: INFO: Pod "pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163731ms Mar 29 21:10:42.994: INFO: Pod "pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007626268s Mar 29 21:10:45.006: INFO: Pod "pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019844519s STEP: Saw pod success Mar 29 21:10:45.007: INFO: Pod "pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa" satisfied condition "success or failure" Mar 29 21:10:45.010: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa container secret-env-test: STEP: delete the pod Mar 29 21:10:45.032: INFO: Waiting for pod pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa to disappear Mar 29 21:10:45.077: INFO: Pod pod-secrets-ffc7e269-4348-4df1-87f2-fe89f44cabaa no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:10:45.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7873" for this suite. 
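The Secrets test just above injects a Secret key into a container environment variable and then checks the container output for the value. A minimal sketch with hypothetical names (the suite's secret-test-94c4... objects are randomly generated):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF
kubectl logs secret-env-demo    # should print SECRET_DATA=value-1 once the pod completes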
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":396,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:10:45.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:10:56.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5666" for this suite. • [SLOW TEST:11.132 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":24,"skipped":408,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:10:56.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9702 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 29 21:10:56.260: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 29 21:11:14.400: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.159:8080/dial?request=hostname&protocol=http&host=10.244.1.158&port=8080&tries=1'] Namespace:pod-network-test-9702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:11:14.400: INFO: >>> kubeConfig: /root/.kube/config I0329 21:11:14.431906 6 log.go:172] (0xc00157aa50) (0xc0024c70e0) Create stream I0329 21:11:14.431932 6 log.go:172] (0xc00157aa50) (0xc0024c70e0) Stream added, broadcasting: 1 I0329 21:11:14.434567 6 log.go:172] (0xc00157aa50) Reply frame received for 1 I0329 21:11:14.434610 6 log.go:172] (0xc00157aa50) (0xc001f008c0) Create stream I0329 21:11:14.434626 6 log.go:172] (0xc00157aa50) (0xc001f008c0) Stream added, broadcasting: 3 I0329 21:11:14.435669 6 log.go:172] (0xc00157aa50) Reply frame received for 3 I0329 21:11:14.435708 6 log.go:172] (0xc00157aa50) (0xc0024c7180) Create stream I0329 21:11:14.435723 6 log.go:172] (0xc00157aa50) (0xc0024c7180) Stream added, broadcasting: 5 I0329 21:11:14.436589 6 log.go:172] (0xc00157aa50) Reply frame received for 5 I0329 21:11:14.534760 6 log.go:172] (0xc00157aa50) Data frame received for 3 I0329 21:11:14.534813 6 log.go:172] (0xc001f008c0) (3) Data frame handling I0329 21:11:14.534850 6 log.go:172] (0xc001f008c0) (3) Data frame sent I0329 21:11:14.535433 6 log.go:172] (0xc00157aa50) Data frame received for 3 I0329 21:11:14.535462 6 log.go:172] (0xc001f008c0) (3) Data frame handling I0329 21:11:14.535750 6 log.go:172] (0xc00157aa50) Data frame received for 5 I0329 21:11:14.535775 6 log.go:172] (0xc0024c7180) (5) Data frame handling I0329 21:11:14.537419 6 log.go:172] (0xc00157aa50) Data frame received for 1 I0329 21:11:14.537454 6 log.go:172] (0xc0024c70e0) (1) Data frame handling I0329 21:11:14.537488 6 log.go:172] (0xc0024c70e0) (1) Data frame sent I0329 21:11:14.537733 6 log.go:172] (0xc00157aa50) (0xc0024c70e0) Stream removed, broadcasting: 1 I0329 21:11:14.537857 6 log.go:172] (0xc00157aa50) (0xc0024c70e0) Stream removed, broadcasting: 1 I0329 21:11:14.537904 6 log.go:172] (0xc00157aa50) (0xc001f008c0) Stream removed, broadcasting: 3 I0329 21:11:14.537931 6 log.go:172] (0xc00157aa50) (0xc0024c7180) Stream removed, broadcasting: 5 I0329 21:11:14.537966 6 log.go:172] (0xc00157aa50) Go away received Mar 
29 21:11:14.537: INFO: Waiting for responses: map[] Mar 29 21:11:14.541: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.159:8080/dial?request=hostname&protocol=http&host=10.244.2.223&port=8080&tries=1'] Namespace:pod-network-test-9702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:11:14.541: INFO: >>> kubeConfig: /root/.kube/config I0329 21:11:14.576567 6 log.go:172] (0xc001426580) (0xc001e948c0) Create stream I0329 21:11:14.576593 6 log.go:172] (0xc001426580) (0xc001e948c0) Stream added, broadcasting: 1 I0329 21:11:14.579900 6 log.go:172] (0xc001426580) Reply frame received for 1 I0329 21:11:14.579949 6 log.go:172] (0xc001426580) (0xc001e94960) Create stream I0329 21:11:14.579969 6 log.go:172] (0xc001426580) (0xc001e94960) Stream added, broadcasting: 3 I0329 21:11:14.581014 6 log.go:172] (0xc001426580) Reply frame received for 3 I0329 21:11:14.581053 6 log.go:172] (0xc001426580) (0xc002a78fa0) Create stream I0329 21:11:14.581069 6 log.go:172] (0xc001426580) (0xc002a78fa0) Stream added, broadcasting: 5 I0329 21:11:14.582387 6 log.go:172] (0xc001426580) Reply frame received for 5 I0329 21:11:14.652614 6 log.go:172] (0xc001426580) Data frame received for 3 I0329 21:11:14.652659 6 log.go:172] (0xc001e94960) (3) Data frame handling I0329 21:11:14.652684 6 log.go:172] (0xc001e94960) (3) Data frame sent I0329 21:11:14.653098 6 log.go:172] (0xc001426580) Data frame received for 3 I0329 21:11:14.653265 6 log.go:172] (0xc001e94960) (3) Data frame handling I0329 21:11:14.653308 6 log.go:172] (0xc001426580) Data frame received for 5 I0329 21:11:14.653350 6 log.go:172] (0xc002a78fa0) (5) Data frame handling I0329 21:11:14.655412 6 log.go:172] (0xc001426580) Data frame received for 1 I0329 21:11:14.655430 6 log.go:172] (0xc001e948c0) (1) Data frame handling I0329 21:11:14.655444 6 log.go:172] (0xc001e948c0) (1) Data frame sent I0329 21:11:14.655454 6 log.go:172] (0xc001426580) (0xc001e948c0) Stream removed, broadcasting: 1 I0329 21:11:14.655524 6 log.go:172] (0xc001426580) (0xc001e948c0) Stream removed, broadcasting: 1 I0329 21:11:14.655546 6 log.go:172] (0xc001426580) (0xc001e94960) Stream removed, broadcasting: 3 I0329 21:11:14.655563 6 log.go:172] (0xc001426580) (0xc002a78fa0) Stream removed, broadcasting: 5 I0329 21:11:14.655617 6 log.go:172] (0xc001426580) Go away received Mar 29 21:11:14.655: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:11:14.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9702" for this suite. 
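This spec is the same dial pattern as the earlier UDP check, with only the protocol and target port changed:

curl -g -q -s 'http://10.244.1.159:8080/dial?request=hostname&protocol=http&host=10.244.1.158&port=8080&tries=1'

Running both variants back to back is deliberate: a CNI plugin can pass TCP/HTTP traffic while silently dropping UDP, so conformance exercises each path independently.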
• [SLOW TEST:18.444 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":417,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:11:14.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0329 21:11:26.228108 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 29 21:11:26.228: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:11:26.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5704" for this suite. 
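The garbage-collector test above gives half of one ReplicationController's pods a second ownerReference pointing at simpletest-rc-to-stay, then deletes simpletest-rc-to-be-deleted with foreground propagation. The state it sets up looks roughly like this on an affected pod:

kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].name}'
# simpletest-rc-to-be-deleted simpletest-rc-to-stay

# Foreground cascading deletion, kubectl 1.20+ syntax; older clients set
# propagationPolicy=Foreground in deleteOptions through the API instead:
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground

Because each such pod still has one live owner, the expectation is that the GC eventually drops only the dangling ownerReference, never the pod itself.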
• [SLOW TEST:11.571 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":26,"skipped":423,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:11:26.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 29 21:11:26.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7735 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 29 21:11:28.927: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0329 21:11:28.857834 145 log.go:172] (0xc0000f66e0) (0xc0006819a0) Create stream\nI0329 21:11:28.857891 145 log.go:172] (0xc0000f66e0) (0xc0006819a0) Stream added, broadcasting: 1\nI0329 21:11:28.860170 145 log.go:172] (0xc0000f66e0) Reply frame received for 1\nI0329 21:11:28.860241 145 log.go:172] (0xc0000f66e0) (0xc0007592c0) Create stream\nI0329 21:11:28.860286 145 log.go:172] (0xc0000f66e0) (0xc0007592c0) Stream added, broadcasting: 3\nI0329 21:11:28.861431 145 log.go:172] (0xc0000f66e0) Reply frame received for 3\nI0329 21:11:28.861490 145 log.go:172] (0xc0000f66e0) (0xc00082c000) Create stream\nI0329 21:11:28.861517 145 log.go:172] (0xc0000f66e0) (0xc00082c000) Stream added, broadcasting: 5\nI0329 21:11:28.862417 145 log.go:172] (0xc0000f66e0) Reply frame received for 5\nI0329 21:11:28.862449 145 log.go:172] (0xc0000f66e0) (0xc000681a40) Create stream\nI0329 21:11:28.862460 145 log.go:172] (0xc0000f66e0) (0xc000681a40) Stream added, broadcasting: 7\nI0329 21:11:28.863420 145 log.go:172] (0xc0000f66e0) Reply frame received for 7\nI0329 21:11:28.863598 145 log.go:172] (0xc0007592c0) (3) Writing data frame\nI0329 21:11:28.863729 145 log.go:172] (0xc0007592c0) (3) Writing data frame\nI0329 21:11:28.864425 145 log.go:172] (0xc0000f66e0) Data frame received for 5\nI0329 21:11:28.864439 145 log.go:172] (0xc00082c000) (5) Data frame handling\nI0329 21:11:28.864448 145 log.go:172] (0xc00082c000) (5) Data frame sent\nI0329 21:11:28.865564 145 log.go:172] (0xc0000f66e0) Data frame received for 5\nI0329 21:11:28.865577 145 log.go:172] (0xc00082c000) (5) Data frame handling\nI0329 21:11:28.865594 145 log.go:172] (0xc00082c000) (5) Data frame sent\nI0329 21:11:28.899977 145 log.go:172] (0xc0000f66e0) Data frame received for 7\nI0329 21:11:28.900014 145 log.go:172] (0xc000681a40) (7) Data frame handling\nI0329 21:11:28.900040 145 log.go:172] (0xc0000f66e0) Data frame received for 5\nI0329 21:11:28.900059 145 log.go:172] (0xc00082c000) (5) Data frame handling\nI0329 21:11:28.900597 145 log.go:172] (0xc0000f66e0) Data frame received for 1\nI0329 21:11:28.900617 145 log.go:172] (0xc0006819a0) (1) Data frame handling\nI0329 21:11:28.900638 145 log.go:172] (0xc0006819a0) (1) Data frame sent\nI0329 21:11:28.901423 145 log.go:172] (0xc0000f66e0) (0xc0006819a0) Stream removed, broadcasting: 1\nI0329 21:11:28.901755 145 log.go:172] (0xc0000f66e0) (0xc0006819a0) Stream removed, broadcasting: 1\nI0329 21:11:28.901777 145 log.go:172] (0xc0000f66e0) (0xc0007592c0) Stream removed, broadcasting: 3\nI0329 21:11:28.901790 145 log.go:172] (0xc0000f66e0) (0xc00082c000) Stream removed, broadcasting: 5\nI0329 21:11:28.901802 145 log.go:172] (0xc0000f66e0) (0xc000681a40) Stream removed, broadcasting: 7\nI0329 21:11:28.901848 145 log.go:172] (0xc0000f66e0) Go away received\n" Mar 29 21:11:28.927: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:11:30.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7735" for this suite. 
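The deprecation warning captured in stderr above dates this suite: the job/v1 generator was later removed from kubectl entirely. A rough modern equivalent of the attach-then-clean-up flow, using a bare pod rather than a Job (names are illustrative):

echo abcd1234 | kubectl run e2e-test-rm-demo --image=docker.io/library/busybox:1.29 \
  --restart=Never --rm=true -i -- sh -c 'cat && echo "stdin closed"'
# -i pipes stdin through (the test feeds "abcd1234"), and --rm deletes
# the workload once the command exits

Note the real assertion here is the final STEP: after the run, the test lists jobs in the namespace to verify that --rm actually removed the Job it created.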
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":27,"skipped":439,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:11:30.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-ed210434-dba1-42a0-9d71-b68a59c4f2cd STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ed210434-dba1-42a0-9d71-b68a59c4f2cd STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:11:39.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5959" for this suite. • [SLOW TEST:8.181 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":456,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:11:39.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:11:39.231: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 29 21:11:39.244: INFO: Number of nodes with available pods: 0 Mar 29 21:11:39.244: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 29 21:11:39.324: INFO: Number of nodes with available pods: 0 Mar 29 21:11:39.324: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:40.361: INFO: Number of nodes with available pods: 0 Mar 29 21:11:40.361: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:41.328: INFO: Number of nodes with available pods: 0 Mar 29 21:11:41.328: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:42.329: INFO: Number of nodes with available pods: 1 Mar 29 21:11:42.329: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 29 21:11:42.403: INFO: Number of nodes with available pods: 1 Mar 29 21:11:42.403: INFO: Number of running nodes: 0, number of available pods: 1 Mar 29 21:11:43.407: INFO: Number of nodes with available pods: 0 Mar 29 21:11:43.407: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 29 21:11:43.414: INFO: Number of nodes with available pods: 0 Mar 29 21:11:43.415: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:44.493: INFO: Number of nodes with available pods: 0 Mar 29 21:11:44.493: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:45.419: INFO: Number of nodes with available pods: 0 Mar 29 21:11:45.419: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:46.419: INFO: Number of nodes with available pods: 0 Mar 29 21:11:46.419: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:47.419: INFO: Number of nodes with available pods: 0 Mar 29 21:11:47.419: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:48.419: INFO: Number of nodes with available pods: 0 Mar 29 21:11:48.419: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:49.419: INFO: Number of nodes with available pods: 0 Mar 29 21:11:49.419: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:50.418: INFO: Number of nodes with available pods: 0 Mar 29 21:11:50.418: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:51.433: INFO: Number of nodes with available pods: 0 Mar 29 21:11:51.433: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:11:52.419: INFO: Number of nodes with available pods: 1 Mar 29 21:11:52.419: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-915, will wait for the garbage collector to delete the pods Mar 29 21:11:52.484: INFO: Deleting DaemonSet.extensions daemon-set took: 6.353017ms Mar 29 21:11:52.784: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275094ms Mar 29 21:11:59.287: INFO: Number of nodes with available pods: 0 Mar 29 21:11:59.287: INFO: Number of running nodes: 0, number of available pods: 0 Mar 29 21:11:59.293: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-915/daemonsets","resourceVersion":"3782914"},"items":null} Mar 29 21:11:59.296: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-915/pods","resourceVersion":"3782914"},"items":null} 
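The counts in this DaemonSet spec are driven by a label dance: the DaemonSet carries a nodeSelector, so flipping one node's label in and out of the selector schedules and then unschedules the daemon pod. With illustrative label names (the suite generates its own):

# DaemonSet template carries: spec.template.spec.nodeSelector: {color: blue}
kubectl label node jerma-worker color=blue               # pod launches, 1 available
kubectl label node jerma-worker color=green --overwrite  # pod is unscheduled again
# after the DaemonSet's selector is switched to green, the pod returns

The repeated "Node jerma-worker is running more than one daemon pod" lines are evidently the poll loop's catch-all message whenever a node's pod count is not exactly one (the adjacent counts show zero pods), so they are progress noise rather than an actual error.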
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:11:59.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-915" for this suite. • [SLOW TEST:20.264 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":29,"skipped":462,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:11:59.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-37689298-8f4e-436e-ba00-e460d5f4d32c STEP: Creating a pod to test consume secrets Mar 29 21:11:59.468: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c" in namespace "projected-1558" to be "success or failure" Mar 29 21:11:59.547: INFO: Pod "pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c": Phase="Pending", Reason="", readiness=false. Elapsed: 78.875934ms Mar 29 21:12:01.552: INFO: Pod "pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084269315s Mar 29 21:12:03.557: INFO: Pod "pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088791073s STEP: Saw pod success Mar 29 21:12:03.557: INFO: Pod "pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c" satisfied condition "success or failure" Mar 29 21:12:03.560: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c container projected-secret-volume-test: STEP: delete the pod Mar 29 21:12:03.612: INFO: Waiting for pod pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c to disappear Mar 29 21:12:03.622: INFO: Pod pod-projected-secrets-fe2fc323-8d6d-4771-909f-3ad288edad6c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:12:03.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1558" for this suite. 
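The projected-secret test just above is the projected-volume variant of the plain secret-volume specs: the Secret is delivered through a projected source, and defaultMode sets the permission bits for every projected file. The relevant volume stanza, sketched with a hypothetical secret name:

volumes:
- name: secret-vol
  projected:
    defaultMode: 0400           # applied to all files from all sources
    sources:
    - secret:
        name: demo-secret       # hypothetical; the suite uses projected-secret-test-...

The test container stats the mounted file, and the pod reaches "Succeeded" only if both the content and the mode match.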
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:12:03.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 29 21:12:03.715: INFO: Waiting up to 5m0s for pod "pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78" in namespace "emptydir-5261" to be "success or failure" Mar 29 21:12:03.718: INFO: Pod "pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456074ms Mar 29 21:12:05.722: INFO: Pod "pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006314787s Mar 29 21:12:07.725: INFO: Pod "pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010149084s STEP: Saw pod success Mar 29 21:12:07.725: INFO: Pod "pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78" satisfied condition "success or failure" Mar 29 21:12:07.728: INFO: Trying to get logs from node jerma-worker2 pod pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78 container test-container: STEP: delete the pod Mar 29 21:12:07.746: INFO: Waiting for pod pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78 to disappear Mar 29 21:12:07.756: INFO: Pod pod-8ba5bd0a-f6bd-4828-8e6d-5eec0bf92f78 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:12:07.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5261" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:12:07.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:12:08.347: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:12:10.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:12:12.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113128, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:12:15.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:12:15.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4883-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:12:16.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5011" for this suite. STEP: Destroying namespace "webhook-5011-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.844 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":32,"skipped":585,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:12:16.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-cc6fab84-d7b4-4c7a-b334-70823eb39ed9 STEP: Creating configMap with name cm-test-opt-upd-cdccbc95-85d0-4794-8c16-3fa42e4af419 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cc6fab84-d7b4-4c7a-b334-70823eb39ed9 STEP: Updating configmap cm-test-opt-upd-cdccbc95-85d0-4794-8c16-3fa42e4af419 STEP: Creating configMap with name cm-test-opt-create-222a8942-22d4-4eb6-b73c-8227ec0a6fc2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:13:41.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3409" for this suite. 
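The optional-updates test above watches three behaviours at once inside a single projected volume: a deleted ConfigMap's file disappearing, an updated ConfigMap's file changing, and an optional ConfigMap that did not exist at pod start appearing once created. The optional flag is what lets the pod run before that third ConfigMap exists:

volumes:
- name: cm-vol
  projected:
    sources:
    - configMap:
        name: cm-test-opt-create-demo   # hypothetical; may not exist yet
        optional: true                  # pod starts even while it is absent

The long wall-clock time (84 seconds) is largely the kubelet's periodic volume resync rather than test overhead; propagation of ConfigMap changes into running pods is eventually consistent, which is why the spec simply waits to observe the update.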
• [SLOW TEST:84.578 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":598,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:13:41.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 29 21:13:41.253: INFO: Waiting up to 5m0s for pod "client-containers-f57f94db-a948-458f-a191-12e4100eb0bd" in namespace "containers-4902" to be "success or failure" Mar 29 21:13:41.264: INFO: Pod "client-containers-f57f94db-a948-458f-a191-12e4100eb0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.118488ms Mar 29 21:13:43.267: INFO: Pod "client-containers-f57f94db-a948-458f-a191-12e4100eb0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014506437s Mar 29 21:13:45.271: INFO: Pod "client-containers-f57f94db-a948-458f-a191-12e4100eb0bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017858664s STEP: Saw pod success Mar 29 21:13:45.271: INFO: Pod "client-containers-f57f94db-a948-458f-a191-12e4100eb0bd" satisfied condition "success or failure" Mar 29 21:13:45.274: INFO: Trying to get logs from node jerma-worker2 pod client-containers-f57f94db-a948-458f-a191-12e4100eb0bd container test-container: STEP: delete the pod Mar 29 21:13:45.326: INFO: Waiting for pod client-containers-f57f94db-a948-458f-a191-12e4100eb0bd to disappear Mar 29 21:13:45.329: INFO: Pod client-containers-f57f94db-a948-458f-a191-12e4100eb0bd no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:13:45.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4902" for this suite. 
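"docker cmd" in the containers test above maps to the pod spec's args field: in Kubernetes, args overrides the image's CMD while leaving its ENTRYPOINT intact, whereas command would override the ENTRYPOINT. The shape under test, sketched with illustrative values:

containers:
- name: test-container
  image: docker.io/library/busybox:1.29
  args: ["echo", "override", "arguments"]   # replaces the image CMD only

The spec passes when the container's logged output shows the overridden arguments rather than the image defaults.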
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":612,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:13:45.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 29 21:13:45.430: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:13:45.450: INFO: Number of nodes with available pods: 0 Mar 29 21:13:45.450: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:13:46.591: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:13:46.635: INFO: Number of nodes with available pods: 0 Mar 29 21:13:46.635: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:13:47.481: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:13:47.484: INFO: Number of nodes with available pods: 0 Mar 29 21:13:47.484: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:13:48.519: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:13:48.522: INFO: Number of nodes with available pods: 0 Mar 29 21:13:48.522: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:13:49.472: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:13:49.495: INFO: Number of nodes with available pods: 2 Mar 29 21:13:49.495: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 29 21:13:49.544: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:13:49.558: INFO: Number of nodes with available pods: 2 Mar 29 21:13:49.559: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8179, will wait for the garbage collector to delete the pods Mar 29 21:13:50.706: INFO: Deleting DaemonSet.extensions daemon-set took: 64.975531ms Mar 29 21:13:51.007: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.247333ms Mar 29 21:13:59.610: INFO: Number of nodes with available pods: 0 Mar 29 21:13:59.610: INFO: Number of running nodes: 0, number of available pods: 0 Mar 29 21:13:59.613: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8179/daemonsets","resourceVersion":"3783524"},"items":null} Mar 29 21:13:59.615: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8179/pods","resourceVersion":"3783524"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:13:59.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8179" for this suite. • [SLOW TEST:14.281 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":35,"skipped":614,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:13:59.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:14:15.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4861" for this suite. • [SLOW TEST:16.324 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":36,"skipped":625,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:14:15.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 29 21:14:26.154: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.154: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.187790 6 log.go:172] (0xc0010862c0) (0xc001b2f680) Create stream I0329 21:14:26.187832 6 log.go:172] (0xc0010862c0) (0xc001b2f680) Stream added, broadcasting: 1 I0329 21:14:26.190538 6 log.go:172] (0xc0010862c0) Reply frame received for 1 I0329 21:14:26.190582 6 log.go:172] (0xc0010862c0) (0xc001b2f720) Create stream I0329 21:14:26.190605 6 log.go:172] (0xc0010862c0) (0xc001b2f720) Stream added, broadcasting: 3 I0329 21:14:26.191554 6 log.go:172] (0xc0010862c0) Reply frame received for 3 I0329 21:14:26.191579 6 
log.go:172] (0xc0010862c0) (0xc001fc61e0) Create stream I0329 21:14:26.191591 6 log.go:172] (0xc0010862c0) (0xc001fc61e0) Stream added, broadcasting: 5 I0329 21:14:26.192910 6 log.go:172] (0xc0010862c0) Reply frame received for 5 I0329 21:14:26.277871 6 log.go:172] (0xc0010862c0) Data frame received for 3 I0329 21:14:26.277932 6 log.go:172] (0xc001b2f720) (3) Data frame handling I0329 21:14:26.277961 6 log.go:172] (0xc001b2f720) (3) Data frame sent I0329 21:14:26.277976 6 log.go:172] (0xc0010862c0) Data frame received for 3 I0329 21:14:26.278013 6 log.go:172] (0xc001b2f720) (3) Data frame handling I0329 21:14:26.278041 6 log.go:172] (0xc0010862c0) Data frame received for 5 I0329 21:14:26.278056 6 log.go:172] (0xc001fc61e0) (5) Data frame handling I0329 21:14:26.279597 6 log.go:172] (0xc0010862c0) Data frame received for 1 I0329 21:14:26.279620 6 log.go:172] (0xc001b2f680) (1) Data frame handling I0329 21:14:26.279641 6 log.go:172] (0xc001b2f680) (1) Data frame sent I0329 21:14:26.279661 6 log.go:172] (0xc0010862c0) (0xc001b2f680) Stream removed, broadcasting: 1 I0329 21:14:26.279741 6 log.go:172] (0xc0010862c0) (0xc001b2f680) Stream removed, broadcasting: 1 I0329 21:14:26.279759 6 log.go:172] (0xc0010862c0) (0xc001b2f720) Stream removed, broadcasting: 3 I0329 21:14:26.279975 6 log.go:172] (0xc0010862c0) (0xc001fc61e0) Stream removed, broadcasting: 5 I0329 21:14:26.280105 6 log.go:172] (0xc0010862c0) Go away received Mar 29 21:14:26.280: INFO: Exec stderr: "" Mar 29 21:14:26.280: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.280: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.315131 6 log.go:172] (0xc00157a420) (0xc002a78be0) Create stream I0329 21:14:26.315155 6 log.go:172] (0xc00157a420) (0xc002a78be0) Stream added, broadcasting: 1 I0329 21:14:26.316982 6 log.go:172] (0xc00157a420) Reply frame received for 1 I0329 21:14:26.317005 6 log.go:172] (0xc00157a420) (0xc001fc6280) Create stream I0329 21:14:26.317013 6 log.go:172] (0xc00157a420) (0xc001fc6280) Stream added, broadcasting: 3 I0329 21:14:26.318029 6 log.go:172] (0xc00157a420) Reply frame received for 3 I0329 21:14:26.318067 6 log.go:172] (0xc00157a420) (0xc00183d680) Create stream I0329 21:14:26.318081 6 log.go:172] (0xc00157a420) (0xc00183d680) Stream added, broadcasting: 5 I0329 21:14:26.318865 6 log.go:172] (0xc00157a420) Reply frame received for 5 I0329 21:14:26.379533 6 log.go:172] (0xc00157a420) Data frame received for 5 I0329 21:14:26.379566 6 log.go:172] (0xc00183d680) (5) Data frame handling I0329 21:14:26.379583 6 log.go:172] (0xc00157a420) Data frame received for 3 I0329 21:14:26.379589 6 log.go:172] (0xc001fc6280) (3) Data frame handling I0329 21:14:26.379598 6 log.go:172] (0xc001fc6280) (3) Data frame sent I0329 21:14:26.379610 6 log.go:172] (0xc00157a420) Data frame received for 3 I0329 21:14:26.379614 6 log.go:172] (0xc001fc6280) (3) Data frame handling I0329 21:14:26.381411 6 log.go:172] (0xc00157a420) Data frame received for 1 I0329 21:14:26.381449 6 log.go:172] (0xc002a78be0) (1) Data frame handling I0329 21:14:26.381486 6 log.go:172] (0xc002a78be0) (1) Data frame sent I0329 21:14:26.381511 6 log.go:172] (0xc00157a420) (0xc002a78be0) Stream removed, broadcasting: 1 I0329 21:14:26.381578 6 log.go:172] (0xc00157a420) Go away received I0329 21:14:26.381603 6 log.go:172] (0xc00157a420) (0xc002a78be0) Stream removed, broadcasting: 1 I0329 
21:14:26.381640 6 log.go:172] (0xc00157a420) (0xc001fc6280) Stream removed, broadcasting: 3 I0329 21:14:26.381660 6 log.go:172] (0xc00157a420) (0xc00183d680) Stream removed, broadcasting: 5 Mar 29 21:14:26.381: INFO: Exec stderr: "" Mar 29 21:14:26.381: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.381: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.415049 6 log.go:172] (0xc002c04790) (0xc001fc65a0) Create stream I0329 21:14:26.415074 6 log.go:172] (0xc002c04790) (0xc001fc65a0) Stream added, broadcasting: 1 I0329 21:14:26.417815 6 log.go:172] (0xc002c04790) Reply frame received for 1 I0329 21:14:26.418007 6 log.go:172] (0xc002c04790) (0xc001e94000) Create stream I0329 21:14:26.418036 6 log.go:172] (0xc002c04790) (0xc001e94000) Stream added, broadcasting: 3 I0329 21:14:26.419322 6 log.go:172] (0xc002c04790) Reply frame received for 3 I0329 21:14:26.419377 6 log.go:172] (0xc002c04790) (0xc002a78c80) Create stream I0329 21:14:26.419393 6 log.go:172] (0xc002c04790) (0xc002a78c80) Stream added, broadcasting: 5 I0329 21:14:26.420397 6 log.go:172] (0xc002c04790) Reply frame received for 5 I0329 21:14:26.480852 6 log.go:172] (0xc002c04790) Data frame received for 3 I0329 21:14:26.480881 6 log.go:172] (0xc001e94000) (3) Data frame handling I0329 21:14:26.480909 6 log.go:172] (0xc002c04790) Data frame received for 5 I0329 21:14:26.481051 6 log.go:172] (0xc002a78c80) (5) Data frame handling I0329 21:14:26.481080 6 log.go:172] (0xc001e94000) (3) Data frame sent I0329 21:14:26.481096 6 log.go:172] (0xc002c04790) Data frame received for 3 I0329 21:14:26.481219 6 log.go:172] (0xc001e94000) (3) Data frame handling I0329 21:14:26.481948 6 log.go:172] (0xc002c04790) Data frame received for 1 I0329 21:14:26.481962 6 log.go:172] (0xc001fc65a0) (1) Data frame handling I0329 21:14:26.481974 6 log.go:172] (0xc001fc65a0) (1) Data frame sent I0329 21:14:26.481988 6 log.go:172] (0xc002c04790) (0xc001fc65a0) Stream removed, broadcasting: 1 I0329 21:14:26.482011 6 log.go:172] (0xc002c04790) Go away received I0329 21:14:26.482078 6 log.go:172] (0xc002c04790) (0xc001fc65a0) Stream removed, broadcasting: 1 I0329 21:14:26.482111 6 log.go:172] (0xc002c04790) (0xc001e94000) Stream removed, broadcasting: 3 I0329 21:14:26.482120 6 log.go:172] (0xc002c04790) (0xc002a78c80) Stream removed, broadcasting: 5 Mar 29 21:14:26.482: INFO: Exec stderr: "" Mar 29 21:14:26.482: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.482: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.510763 6 log.go:172] (0xc002c04e70) (0xc001fc6780) Create stream I0329 21:14:26.510806 6 log.go:172] (0xc002c04e70) (0xc001fc6780) Stream added, broadcasting: 1 I0329 21:14:26.513309 6 log.go:172] (0xc002c04e70) Reply frame received for 1 I0329 21:14:26.513369 6 log.go:172] (0xc002c04e70) (0xc001fc6960) Create stream I0329 21:14:26.513396 6 log.go:172] (0xc002c04e70) (0xc001fc6960) Stream added, broadcasting: 3 I0329 21:14:26.514458 6 log.go:172] (0xc002c04e70) Reply frame received for 3 I0329 21:14:26.514505 6 log.go:172] (0xc002c04e70) (0xc001fc6aa0) Create stream I0329 21:14:26.514527 6 log.go:172] (0xc002c04e70) (0xc001fc6aa0) Stream added, broadcasting: 5 I0329 21:14:26.515336 6 log.go:172] (0xc002c04e70) 
Reply frame received for 5 I0329 21:14:26.593275 6 log.go:172] (0xc002c04e70) Data frame received for 5 I0329 21:14:26.593349 6 log.go:172] (0xc001fc6aa0) (5) Data frame handling I0329 21:14:26.593382 6 log.go:172] (0xc002c04e70) Data frame received for 3 I0329 21:14:26.593396 6 log.go:172] (0xc001fc6960) (3) Data frame handling I0329 21:14:26.593412 6 log.go:172] (0xc001fc6960) (3) Data frame sent I0329 21:14:26.593431 6 log.go:172] (0xc002c04e70) Data frame received for 3 I0329 21:14:26.593447 6 log.go:172] (0xc001fc6960) (3) Data frame handling I0329 21:14:26.595176 6 log.go:172] (0xc002c04e70) Data frame received for 1 I0329 21:14:26.595204 6 log.go:172] (0xc001fc6780) (1) Data frame handling I0329 21:14:26.595222 6 log.go:172] (0xc001fc6780) (1) Data frame sent I0329 21:14:26.595230 6 log.go:172] (0xc002c04e70) (0xc001fc6780) Stream removed, broadcasting: 1 I0329 21:14:26.595301 6 log.go:172] (0xc002c04e70) (0xc001fc6780) Stream removed, broadcasting: 1 I0329 21:14:26.595310 6 log.go:172] (0xc002c04e70) (0xc001fc6960) Stream removed, broadcasting: 3 I0329 21:14:26.595316 6 log.go:172] (0xc002c04e70) (0xc001fc6aa0) Stream removed, broadcasting: 5 Mar 29 21:14:26.595: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount I0329 21:14:26.595353 6 log.go:172] (0xc002c04e70) Go away received Mar 29 21:14:26.595: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.595: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.627442 6 log.go:172] (0xc0014264d0) (0xc001e94320) Create stream I0329 21:14:26.627469 6 log.go:172] (0xc0014264d0) (0xc001e94320) Stream added, broadcasting: 1 I0329 21:14:26.628986 6 log.go:172] (0xc0014264d0) Reply frame received for 1 I0329 21:14:26.629046 6 log.go:172] (0xc0014264d0) (0xc00183d720) Create stream I0329 21:14:26.629064 6 log.go:172] (0xc0014264d0) (0xc00183d720) Stream added, broadcasting: 3 I0329 21:14:26.630223 6 log.go:172] (0xc0014264d0) Reply frame received for 3 I0329 21:14:26.630262 6 log.go:172] (0xc0014264d0) (0xc00183d860) Create stream I0329 21:14:26.630278 6 log.go:172] (0xc0014264d0) (0xc00183d860) Stream added, broadcasting: 5 I0329 21:14:26.631166 6 log.go:172] (0xc0014264d0) Reply frame received for 5 I0329 21:14:26.688749 6 log.go:172] (0xc0014264d0) Data frame received for 3 I0329 21:14:26.688771 6 log.go:172] (0xc00183d720) (3) Data frame handling I0329 21:14:26.688779 6 log.go:172] (0xc00183d720) (3) Data frame sent I0329 21:14:26.688784 6 log.go:172] (0xc0014264d0) Data frame received for 3 I0329 21:14:26.688788 6 log.go:172] (0xc00183d720) (3) Data frame handling I0329 21:14:26.688817 6 log.go:172] (0xc0014264d0) Data frame received for 5 I0329 21:14:26.688857 6 log.go:172] (0xc00183d860) (5) Data frame handling I0329 21:14:26.690798 6 log.go:172] (0xc0014264d0) Data frame received for 1 I0329 21:14:26.690841 6 log.go:172] (0xc001e94320) (1) Data frame handling I0329 21:14:26.690868 6 log.go:172] (0xc001e94320) (1) Data frame sent I0329 21:14:26.690955 6 log.go:172] (0xc0014264d0) (0xc001e94320) Stream removed, broadcasting: 1 I0329 21:14:26.691093 6 log.go:172] (0xc0014264d0) Go away received I0329 21:14:26.691116 6 log.go:172] (0xc0014264d0) (0xc001e94320) Stream removed, broadcasting: 1 I0329 21:14:26.691127 6 log.go:172] (0xc0014264d0) (0xc00183d720) Stream removed, broadcasting: 3 I0329 
21:14:26.691138 6 log.go:172] (0xc0014264d0) (0xc00183d860) Stream removed, broadcasting: 5 Mar 29 21:14:26.691: INFO: Exec stderr: "" Mar 29 21:14:26.691: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.691: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.718010 6 log.go:172] (0xc002a7f760) (0xc00183db80) Create stream I0329 21:14:26.718034 6 log.go:172] (0xc002a7f760) (0xc00183db80) Stream added, broadcasting: 1 I0329 21:14:26.719422 6 log.go:172] (0xc002a7f760) Reply frame received for 1 I0329 21:14:26.719461 6 log.go:172] (0xc002a7f760) (0xc001fc6be0) Create stream I0329 21:14:26.719470 6 log.go:172] (0xc002a7f760) (0xc001fc6be0) Stream added, broadcasting: 3 I0329 21:14:26.720206 6 log.go:172] (0xc002a7f760) Reply frame received for 3 I0329 21:14:26.720231 6 log.go:172] (0xc002a7f760) (0xc002a78e60) Create stream I0329 21:14:26.720242 6 log.go:172] (0xc002a7f760) (0xc002a78e60) Stream added, broadcasting: 5 I0329 21:14:26.720878 6 log.go:172] (0xc002a7f760) Reply frame received for 5 I0329 21:14:26.780704 6 log.go:172] (0xc002a7f760) Data frame received for 5 I0329 21:14:26.780743 6 log.go:172] (0xc002a78e60) (5) Data frame handling I0329 21:14:26.780773 6 log.go:172] (0xc002a7f760) Data frame received for 3 I0329 21:14:26.780789 6 log.go:172] (0xc001fc6be0) (3) Data frame handling I0329 21:14:26.780805 6 log.go:172] (0xc001fc6be0) (3) Data frame sent I0329 21:14:26.780818 6 log.go:172] (0xc002a7f760) Data frame received for 3 I0329 21:14:26.780839 6 log.go:172] (0xc001fc6be0) (3) Data frame handling I0329 21:14:26.782660 6 log.go:172] (0xc002a7f760) Data frame received for 1 I0329 21:14:26.782696 6 log.go:172] (0xc00183db80) (1) Data frame handling I0329 21:14:26.782796 6 log.go:172] (0xc00183db80) (1) Data frame sent I0329 21:14:26.782821 6 log.go:172] (0xc002a7f760) (0xc00183db80) Stream removed, broadcasting: 1 I0329 21:14:26.782844 6 log.go:172] (0xc002a7f760) Go away received I0329 21:14:26.782975 6 log.go:172] (0xc002a7f760) (0xc00183db80) Stream removed, broadcasting: 1 I0329 21:14:26.783012 6 log.go:172] (0xc002a7f760) (0xc001fc6be0) Stream removed, broadcasting: 3 I0329 21:14:26.783041 6 log.go:172] (0xc002a7f760) (0xc002a78e60) Stream removed, broadcasting: 5 Mar 29 21:14:26.783: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 29 21:14:26.783: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.783: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.819114 6 log.go:172] (0xc00157abb0) (0xc002a79040) Create stream I0329 21:14:26.819151 6 log.go:172] (0xc00157abb0) (0xc002a79040) Stream added, broadcasting: 1 I0329 21:14:26.821250 6 log.go:172] (0xc00157abb0) Reply frame received for 1 I0329 21:14:26.821317 6 log.go:172] (0xc00157abb0) (0xc001fc6dc0) Create stream I0329 21:14:26.821339 6 log.go:172] (0xc00157abb0) (0xc001fc6dc0) Stream added, broadcasting: 3 I0329 21:14:26.822520 6 log.go:172] (0xc00157abb0) Reply frame received for 3 I0329 21:14:26.822561 6 log.go:172] (0xc00157abb0) (0xc002a790e0) Create stream I0329 21:14:26.822576 6 log.go:172] (0xc00157abb0) (0xc002a790e0) Stream added, broadcasting: 5 I0329 21:14:26.824013 6 log.go:172] 
(0xc00157abb0) Reply frame received for 5 I0329 21:14:26.862122 6 log.go:172] (0xc00157abb0) Data frame received for 5 I0329 21:14:26.862146 6 log.go:172] (0xc002a790e0) (5) Data frame handling I0329 21:14:26.862166 6 log.go:172] (0xc00157abb0) Data frame received for 3 I0329 21:14:26.862184 6 log.go:172] (0xc001fc6dc0) (3) Data frame handling I0329 21:14:26.862197 6 log.go:172] (0xc001fc6dc0) (3) Data frame sent I0329 21:14:26.862213 6 log.go:172] (0xc00157abb0) Data frame received for 3 I0329 21:14:26.862222 6 log.go:172] (0xc001fc6dc0) (3) Data frame handling I0329 21:14:26.864037 6 log.go:172] (0xc00157abb0) Data frame received for 1 I0329 21:14:26.864059 6 log.go:172] (0xc002a79040) (1) Data frame handling I0329 21:14:26.864071 6 log.go:172] (0xc002a79040) (1) Data frame sent I0329 21:14:26.864086 6 log.go:172] (0xc00157abb0) (0xc002a79040) Stream removed, broadcasting: 1 I0329 21:14:26.864101 6 log.go:172] (0xc00157abb0) Go away received I0329 21:14:26.864297 6 log.go:172] (0xc00157abb0) (0xc002a79040) Stream removed, broadcasting: 1 I0329 21:14:26.864330 6 log.go:172] (0xc00157abb0) (0xc001fc6dc0) Stream removed, broadcasting: 3 I0329 21:14:26.864344 6 log.go:172] (0xc00157abb0) (0xc002a790e0) Stream removed, broadcasting: 5 Mar 29 21:14:26.864: INFO: Exec stderr: "" Mar 29 21:14:26.864: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.864: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:26.916788 6 log.go:172] (0xc001426b00) (0xc001e946e0) Create stream I0329 21:14:26.916811 6 log.go:172] (0xc001426b00) (0xc001e946e0) Stream added, broadcasting: 1 I0329 21:14:26.919087 6 log.go:172] (0xc001426b00) Reply frame received for 1 I0329 21:14:26.919123 6 log.go:172] (0xc001426b00) (0xc002a79180) Create stream I0329 21:14:26.919137 6 log.go:172] (0xc001426b00) (0xc002a79180) Stream added, broadcasting: 3 I0329 21:14:26.920095 6 log.go:172] (0xc001426b00) Reply frame received for 3 I0329 21:14:26.920158 6 log.go:172] (0xc001426b00) (0xc002a79220) Create stream I0329 21:14:26.920186 6 log.go:172] (0xc001426b00) (0xc002a79220) Stream added, broadcasting: 5 I0329 21:14:26.921279 6 log.go:172] (0xc001426b00) Reply frame received for 5 I0329 21:14:26.992742 6 log.go:172] (0xc001426b00) Data frame received for 5 I0329 21:14:26.992775 6 log.go:172] (0xc002a79220) (5) Data frame handling I0329 21:14:26.992797 6 log.go:172] (0xc001426b00) Data frame received for 3 I0329 21:14:26.992807 6 log.go:172] (0xc002a79180) (3) Data frame handling I0329 21:14:26.992818 6 log.go:172] (0xc002a79180) (3) Data frame sent I0329 21:14:26.993008 6 log.go:172] (0xc001426b00) Data frame received for 3 I0329 21:14:26.993022 6 log.go:172] (0xc002a79180) (3) Data frame handling I0329 21:14:26.994791 6 log.go:172] (0xc001426b00) Data frame received for 1 I0329 21:14:26.994828 6 log.go:172] (0xc001e946e0) (1) Data frame handling I0329 21:14:26.994849 6 log.go:172] (0xc001e946e0) (1) Data frame sent I0329 21:14:26.994881 6 log.go:172] (0xc001426b00) (0xc001e946e0) Stream removed, broadcasting: 1 I0329 21:14:26.994961 6 log.go:172] (0xc001426b00) Go away received I0329 21:14:26.994994 6 log.go:172] (0xc001426b00) (0xc001e946e0) Stream removed, broadcasting: 1 I0329 21:14:26.995015 6 log.go:172] (0xc001426b00) (0xc002a79180) Stream removed, broadcasting: 3 I0329 21:14:26.995028 6 log.go:172] (0xc001426b00) (0xc002a79220) Stream removed, 
broadcasting: 5 Mar 29 21:14:26.995: INFO: Exec stderr: "" Mar 29 21:14:26.995: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:26.995: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:27.034574 6 log.go:172] (0xc001427130) (0xc001e94960) Create stream I0329 21:14:27.034597 6 log.go:172] (0xc001427130) (0xc001e94960) Stream added, broadcasting: 1 I0329 21:14:27.037544 6 log.go:172] (0xc001427130) Reply frame received for 1 I0329 21:14:27.037594 6 log.go:172] (0xc001427130) (0xc001fc6e60) Create stream I0329 21:14:27.037610 6 log.go:172] (0xc001427130) (0xc001fc6e60) Stream added, broadcasting: 3 I0329 21:14:27.038599 6 log.go:172] (0xc001427130) Reply frame received for 3 I0329 21:14:27.038641 6 log.go:172] (0xc001427130) (0xc002a792c0) Create stream I0329 21:14:27.038654 6 log.go:172] (0xc001427130) (0xc002a792c0) Stream added, broadcasting: 5 I0329 21:14:27.039556 6 log.go:172] (0xc001427130) Reply frame received for 5 I0329 21:14:27.089305 6 log.go:172] (0xc001427130) Data frame received for 3 I0329 21:14:27.089351 6 log.go:172] (0xc001fc6e60) (3) Data frame handling I0329 21:14:27.089384 6 log.go:172] (0xc001fc6e60) (3) Data frame sent I0329 21:14:27.089403 6 log.go:172] (0xc001427130) Data frame received for 3 I0329 21:14:27.089418 6 log.go:172] (0xc001fc6e60) (3) Data frame handling I0329 21:14:27.089551 6 log.go:172] (0xc001427130) Data frame received for 5 I0329 21:14:27.089578 6 log.go:172] (0xc002a792c0) (5) Data frame handling I0329 21:14:27.091202 6 log.go:172] (0xc001427130) Data frame received for 1 I0329 21:14:27.091230 6 log.go:172] (0xc001e94960) (1) Data frame handling I0329 21:14:27.091251 6 log.go:172] (0xc001e94960) (1) Data frame sent I0329 21:14:27.091276 6 log.go:172] (0xc001427130) (0xc001e94960) Stream removed, broadcasting: 1 I0329 21:14:27.091302 6 log.go:172] (0xc001427130) Go away received I0329 21:14:27.091405 6 log.go:172] (0xc001427130) (0xc001e94960) Stream removed, broadcasting: 1 I0329 21:14:27.091432 6 log.go:172] (0xc001427130) (0xc001fc6e60) Stream removed, broadcasting: 3 I0329 21:14:27.091444 6 log.go:172] (0xc001427130) (0xc002a792c0) Stream removed, broadcasting: 5 Mar 29 21:14:27.091: INFO: Exec stderr: "" Mar 29 21:14:27.091: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2686 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:14:27.091: INFO: >>> kubeConfig: /root/.kube/config I0329 21:14:27.123721 6 log.go:172] (0xc00157b1e0) (0xc002a794a0) Create stream I0329 21:14:27.123749 6 log.go:172] (0xc00157b1e0) (0xc002a794a0) Stream added, broadcasting: 1 I0329 21:14:27.127147 6 log.go:172] (0xc00157b1e0) Reply frame received for 1 I0329 21:14:27.127195 6 log.go:172] (0xc00157b1e0) (0xc002a79540) Create stream I0329 21:14:27.127212 6 log.go:172] (0xc00157b1e0) (0xc002a79540) Stream added, broadcasting: 3 I0329 21:14:27.128316 6 log.go:172] (0xc00157b1e0) Reply frame received for 3 I0329 21:14:27.128352 6 log.go:172] (0xc00157b1e0) (0xc001e94a00) Create stream I0329 21:14:27.128365 6 log.go:172] (0xc00157b1e0) (0xc001e94a00) Stream added, broadcasting: 5 I0329 21:14:27.129447 6 log.go:172] (0xc00157b1e0) Reply frame received for 5 I0329 21:14:27.205685 6 log.go:172] (0xc00157b1e0) Data frame received for 5 I0329 21:14:27.205714 6 log.go:172] 
(0xc001e94a00) (5) Data frame handling I0329 21:14:27.205734 6 log.go:172] (0xc00157b1e0) Data frame received for 3 I0329 21:14:27.205745 6 log.go:172] (0xc002a79540) (3) Data frame handling I0329 21:14:27.205757 6 log.go:172] (0xc002a79540) (3) Data frame sent I0329 21:14:27.205767 6 log.go:172] (0xc00157b1e0) Data frame received for 3 I0329 21:14:27.205775 6 log.go:172] (0xc002a79540) (3) Data frame handling I0329 21:14:27.207338 6 log.go:172] (0xc00157b1e0) Data frame received for 1 I0329 21:14:27.207363 6 log.go:172] (0xc002a794a0) (1) Data frame handling I0329 21:14:27.207377 6 log.go:172] (0xc002a794a0) (1) Data frame sent I0329 21:14:27.207394 6 log.go:172] (0xc00157b1e0) (0xc002a794a0) Stream removed, broadcasting: 1 I0329 21:14:27.207449 6 log.go:172] (0xc00157b1e0) Go away received I0329 21:14:27.207509 6 log.go:172] (0xc00157b1e0) (0xc002a794a0) Stream removed, broadcasting: 1 I0329 21:14:27.207522 6 log.go:172] (0xc00157b1e0) (0xc002a79540) Stream removed, broadcasting: 3 I0329 21:14:27.207540 6 log.go:172] (0xc00157b1e0) (0xc001e94a00) Stream removed, broadcasting: 5 Mar 29 21:14:27.207: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:14:27.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2686" for this suite. • [SLOW TEST:11.257 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":638,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:14:27.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:14:27.357: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 29 21:14:32.363: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 29 21:14:32.363: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 29 21:14:36.418: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8870 
/apis/apps/v1/namespaces/deployment-8870/deployments/test-cleanup-deployment 3a0d658d-1bc7-4e63-970a-3ee368e71f2e 3783803 1 2020-03-29 21:14:32 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a84ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-29 21:14:32 +0000 UTC,LastTransitionTime:2020-03-29 21:14:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-03-29 21:14:35 +0000 UTC,LastTransitionTime:2020-03-29 21:14:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 29 21:14:36.422: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-8870 /apis/apps/v1/namespaces/deployment-8870/replicasets/test-cleanup-deployment-55ffc6b7b6 d8c295ab-bec7-44d2-9dd2-57cf80c06426 3783792 1 2020-03-29 21:14:32 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3a0d658d-1bc7-4e63-970a-3ee368e71f2e 0xc001d97a17 0xc001d97a18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d97a88 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:14:36.426: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-lnbpz" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-lnbpz test-cleanup-deployment-55ffc6b7b6- deployment-8870 /api/v1/namespaces/deployment-8870/pods/test-cleanup-deployment-55ffc6b7b6-lnbpz ccc0e4ab-b322-4841-810f-58b1bca3a68c 3783791 0 2020-03-29 21:14:32 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 d8c295ab-bec7-44d2-9dd2-57cf80c06426 0xc001d97e27 0xc001d97e28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cv6mk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cv6mk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cv6mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySprea
dConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:14:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:14:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:14:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.236,StartTime:2020-03-29 21:14:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:14:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://83cf0ceb1953ed92ecc822c9ca573185227e1ae1fd86dfe69e9530771300529d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:14:36.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8870" for this suite. • [SLOW TEST:9.218 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":38,"skipped":689,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:14:36.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 29 21:14:36.508: INFO: Waiting up to 5m0s for pod "var-expansion-879b150e-c276-46e8-a880-99a7c83646fa" in namespace "var-expansion-5333" to be "success or failure" Mar 29 21:14:36.511: INFO: Pod "var-expansion-879b150e-c276-46e8-a880-99a7c83646fa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.702026ms Mar 29 21:14:38.530: INFO: Pod "var-expansion-879b150e-c276-46e8-a880-99a7c83646fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02120599s Mar 29 21:14:40.534: INFO: Pod "var-expansion-879b150e-c276-46e8-a880-99a7c83646fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025276577s STEP: Saw pod success Mar 29 21:14:40.534: INFO: Pod "var-expansion-879b150e-c276-46e8-a880-99a7c83646fa" satisfied condition "success or failure" Mar 29 21:14:40.536: INFO: Trying to get logs from node jerma-worker pod var-expansion-879b150e-c276-46e8-a880-99a7c83646fa container dapi-container: STEP: delete the pod Mar 29 21:14:40.555: INFO: Waiting for pod var-expansion-879b150e-c276-46e8-a880-99a7c83646fa to disappear Mar 29 21:14:40.559: INFO: Pod var-expansion-879b150e-c276-46e8-a880-99a7c83646fa no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:14:40.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5333" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:14:40.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-vsbz STEP: Creating a pod to test atomic-volume-subpath Mar 29 21:14:40.706: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vsbz" in namespace "subpath-679" to be "success or failure" Mar 29 21:14:40.715: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.269724ms Mar 29 21:14:42.753: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046846206s Mar 29 21:14:44.757: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 4.050948101s Mar 29 21:14:46.761: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 6.055268556s Mar 29 21:14:48.765: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 8.059535636s Mar 29 21:14:50.770: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 10.063649469s Mar 29 21:14:52.774: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.067557761s Mar 29 21:14:54.778: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 14.072207267s Mar 29 21:14:56.782: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 16.076022271s Mar 29 21:14:58.786: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 18.080305451s Mar 29 21:15:00.806: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 20.100533304s Mar 29 21:15:02.810: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Running", Reason="", readiness=true. Elapsed: 22.10405463s Mar 29 21:15:04.858: INFO: Pod "pod-subpath-test-downwardapi-vsbz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.151794316s STEP: Saw pod success Mar 29 21:15:04.858: INFO: Pod "pod-subpath-test-downwardapi-vsbz" satisfied condition "success or failure" Mar 29 21:15:04.861: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-vsbz container test-container-subpath-downwardapi-vsbz: STEP: delete the pod Mar 29 21:15:05.067: INFO: Waiting for pod pod-subpath-test-downwardapi-vsbz to disappear Mar 29 21:15:05.110: INFO: Pod pod-subpath-test-downwardapi-vsbz no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-vsbz Mar 29 21:15:05.110: INFO: Deleting pod "pod-subpath-test-downwardapi-vsbz" in namespace "subpath-679" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:05.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-679" for this suite. • [SLOW TEST:24.554 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":40,"skipped":719,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:05.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 29 21:15:05.251: INFO: Waiting up to 5m0s for pod "downward-api-6c771394-a309-4530-93fd-119117c71f6d" in namespace "downward-api-4141" to be "success or failure" Mar 29 21:15:05.267: INFO: Pod "downward-api-6c771394-a309-4530-93fd-119117c71f6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.461954ms Mar 29 21:15:07.271: INFO: Pod "downward-api-6c771394-a309-4530-93fd-119117c71f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01998367s Mar 29 21:15:09.275: INFO: Pod "downward-api-6c771394-a309-4530-93fd-119117c71f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024350806s STEP: Saw pod success Mar 29 21:15:09.276: INFO: Pod "downward-api-6c771394-a309-4530-93fd-119117c71f6d" satisfied condition "success or failure" Mar 29 21:15:09.279: INFO: Trying to get logs from node jerma-worker2 pod downward-api-6c771394-a309-4530-93fd-119117c71f6d container dapi-container: STEP: delete the pod Mar 29 21:15:09.304: INFO: Waiting for pod downward-api-6c771394-a309-4530-93fd-119117c71f6d to disappear Mar 29 21:15:09.308: INFO: Pod downward-api-6c771394-a309-4530-93fd-119117c71f6d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:09.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4141" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:09.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:15:09.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533" in namespace "downward-api-2722" to be "success or failure" Mar 29 21:15:09.410: INFO: Pod "downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533": Phase="Pending", Reason="", readiness=false. Elapsed: 21.849652ms Mar 29 21:15:11.414: INFO: Pod "downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025904146s Mar 29 21:15:13.417: INFO: Pod "downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029005529s STEP: Saw pod success Mar 29 21:15:13.417: INFO: Pod "downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533" satisfied condition "success or failure" Mar 29 21:15:13.419: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533 container client-container: STEP: delete the pod Mar 29 21:15:13.485: INFO: Waiting for pod downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533 to disappear Mar 29 21:15:13.488: INFO: Pod downwardapi-volume-ac881020-826d-4f01-9c57-565132ffb533 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:13.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2722" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":759,"failed":0} SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:13.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 29 21:15:13.557: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-778" to be "success or failure" Mar 29 21:15:13.560: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.168043ms Mar 29 21:15:15.564: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006793532s Mar 29 21:15:17.568: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.011161065s Mar 29 21:15:19.572: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015315823s STEP: Saw pod success Mar 29 21:15:19.572: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 29 21:15:19.575: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 29 21:15:19.600: INFO: Waiting for pod pod-host-path-test to disappear Mar 29 21:15:19.619: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:19.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-778" for this suite. 
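The HostPath test above boils down to mounting a hostPath volume into a container and asserting on the mode bits the container sees. A minimal sketch of an equivalent pod built with client-go (v0.18+ assumed); the image, command, and host path are illustrative stand-ins for the suite's own test image, and the namespace is reused from the log:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	hostPathType := v1.HostPathDirectoryOrCreate
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Path and type are illustrative; DirectoryOrCreate makes the
					// kubelet create the directory on the node if it is missing.
					HostPath: &v1.HostPathVolumeSource{Path: "/tmp/host-path", Type: &hostPathType},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Print the mode bits of the mounted directory, the property
				// the e2e test asserts on.
				Command:      []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("hostpath-778").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}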
• [SLOW TEST:6.129 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":762,"failed":0} [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:19.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:19.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3984" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":44,"skipped":762,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:19.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 29 21:15:23.948: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 29 21:15:34.039: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:34.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1089" for this suite. • [SLOW TEST:14.292 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":45,"skipped":763,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:34.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-105a65d0-7e73-4a95-810d-6204d3137b9f STEP: Creating a pod to test consume configMaps Mar 29 21:15:34.138: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796" in namespace 
"projected-8447" to be "success or failure" Mar 29 21:15:34.155: INFO: Pod "pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796": Phase="Pending", Reason="", readiness=false. Elapsed: 17.72566ms Mar 29 21:15:36.160: INFO: Pod "pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021967843s Mar 29 21:15:38.164: INFO: Pod "pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026065638s STEP: Saw pod success Mar 29 21:15:38.164: INFO: Pod "pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796" satisfied condition "success or failure" Mar 29 21:15:38.167: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796 container projected-configmap-volume-test: STEP: delete the pod Mar 29 21:15:38.184: INFO: Waiting for pod pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796 to disappear Mar 29 21:15:38.195: INFO: Pod pod-projected-configmaps-866a5a47-1370-4895-905a-5c1e24d86796 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8447" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":780,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:38.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 29 21:15:42.863: INFO: Successfully updated pod "pod-update-542d6b35-9caa-4fd1-b385-c3c5bd3dc43a" STEP: verifying the updated pod is in kubernetes Mar 29 21:15:42.887: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:42.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6727" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:42.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3993/configmap-test-8326602d-542e-4a78-b054-93b3e5cad8da STEP: Creating a pod to test consume configMaps Mar 29 21:15:42.974: INFO: Waiting up to 5m0s for pod "pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177" in namespace "configmap-3993" to be "success or failure" Mar 29 21:15:42.978: INFO: Pod "pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156225ms Mar 29 21:15:44.982: INFO: Pod "pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008240073s Mar 29 21:15:46.986: INFO: Pod "pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01193091s STEP: Saw pod success Mar 29 21:15:46.986: INFO: Pod "pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177" satisfied condition "success or failure" Mar 29 21:15:46.988: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177 container env-test: STEP: delete the pod Mar 29 21:15:47.003: INFO: Waiting for pod pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177 to disappear Mar 29 21:15:47.032: INFO: Pod pod-configmaps-98e03211-83ba-402c-80ca-4968bf0eb177 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:47.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3993" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":822,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:47.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 29 21:15:47.109: INFO: Waiting up to 5m0s for pod "pod-5f5548b8-2714-4fdd-abd3-e11f73411687" in namespace "emptydir-110" to be "success or failure" Mar 29 21:15:47.116: INFO: Pod "pod-5f5548b8-2714-4fdd-abd3-e11f73411687": Phase="Pending", Reason="", readiness=false. Elapsed: 7.181553ms Mar 29 21:15:49.119: INFO: Pod "pod-5f5548b8-2714-4fdd-abd3-e11f73411687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01055743s Mar 29 21:15:51.123: INFO: Pod "pod-5f5548b8-2714-4fdd-abd3-e11f73411687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014374245s STEP: Saw pod success Mar 29 21:15:51.123: INFO: Pod "pod-5f5548b8-2714-4fdd-abd3-e11f73411687" satisfied condition "success or failure" Mar 29 21:15:51.127: INFO: Trying to get logs from node jerma-worker pod pod-5f5548b8-2714-4fdd-abd3-e11f73411687 container test-container: STEP: delete the pod Mar 29 21:15:51.167: INFO: Waiting for pod pod-5f5548b8-2714-4fdd-abd3-e11f73411687 to disappear Mar 29 21:15:51.171: INFO: Pod pod-5f5548b8-2714-4fdd-abd3-e11f73411687 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:51.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-110" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":823,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:51.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 29 21:15:51.251: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:58.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2731" for this suite. • [SLOW TEST:7.313 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":50,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:58.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 29 21:15:58.569: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:15:58.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5367" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":51,"skipped":858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:15:58.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 29 21:16:03.287: INFO: Successfully updated pod "labelsupdate9f670fbe-792d-41bf-a3b6-67c04657b70f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:16:05.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7552" for this suite. • [SLOW TEST:6.677 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":892,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:16:05.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 29 21:16:13.472: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 29 21:16:13.478: INFO: Pod pod-with-prestop-exec-hook still exists Mar 29 21:16:15.478: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 29 21:16:15.482: INFO: Pod pod-with-prestop-exec-hook still exists Mar 29 21:16:17.478: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 29 21:16:17.481: INFO: Pod pod-with-prestop-exec-hook still exists Mar 29 21:16:19.478: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 29 21:16:19.482: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:16:19.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2728" for this suite. • [SLOW TEST:14.162 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:16:19.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:16:19.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6298" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":54,"skipped":934,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:16:19.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 29 21:16:19.623: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 29 21:16:19.633: INFO: Waiting for terminating namespaces to be deleted... Mar 29 21:16:19.636: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 29 21:16:19.640: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 21:16:19.640: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 21:16:19.640: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 21:16:19.640: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 21:16:19.640: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 29 21:16:19.646: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Mar 29 21:16:19.646: INFO: Container kube-hunter ready: false, restart count 0 Mar 29 21:16:19.646: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 21:16:19.646: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 21:16:19.646: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Mar 29 21:16:19.646: INFO: Container kube-bench ready: false, restart count 0 Mar 29 21:16:19.646: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 21:16:19.646: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 21:16:19.646: INFO: pod-handle-http-request from container-lifecycle-hook-2728 started at 2020-03-29 21:16:05 +0000 UTC (1 container statuses recorded) Mar 29 21:16:19.646: INFO: Container pod-handle-http-request ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 29 21:16:19.758: INFO: Pod pod-handle-http-request requesting resource cpu=0m on Node jerma-worker2 Mar 29 21:16:19.758: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Mar 29 21:16:19.758: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Mar 29 21:16:19.758: 
INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Mar 29 21:16:19.758: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 29 21:16:19.758: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 29 21:16:19.764: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-0182f53b-bb60-4493-9ef3-020a9a233b2b.1600e2edcb098804], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8098/filler-pod-0182f53b-bb60-4493-9ef3-020a9a233b2b to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-0182f53b-bb60-4493-9ef3-020a9a233b2b.1600e2ee11c68404], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0182f53b-bb60-4493-9ef3-020a9a233b2b.1600e2ee54007d6d], Reason = [Created], Message = [Created container filler-pod-0182f53b-bb60-4493-9ef3-020a9a233b2b] STEP: Considering event: Type = [Normal], Name = [filler-pod-0182f53b-bb60-4493-9ef3-020a9a233b2b.1600e2ee6d1787af], Reason = [Started], Message = [Started container filler-pod-0182f53b-bb60-4493-9ef3-020a9a233b2b] STEP: Considering event: Type = [Normal], Name = [filler-pod-a7614642-13c5-4784-87eb-f5a24d924ac6.1600e2edcb67b1d5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8098/filler-pod-a7614642-13c5-4784-87eb-f5a24d924ac6 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a7614642-13c5-4784-87eb-f5a24d924ac6.1600e2ee42ed228a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a7614642-13c5-4784-87eb-f5a24d924ac6.1600e2ee78952990], Reason = [Created], Message = [Created container filler-pod-a7614642-13c5-4784-87eb-f5a24d924ac6] STEP: Considering event: Type = [Normal], Name = [filler-pod-a7614642-13c5-4784-87eb-f5a24d924ac6.1600e2ee87423ee2], Reason = [Started], Message = [Started container filler-pod-a7614642-13c5-4784-87eb-f5a24d924ac6] STEP: Considering event: Type = [Warning], Name = [additional-pod.1600e2eebc5f741a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:16:25.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8098" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.598 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":55,"skipped":959,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:16:25.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:16:25.630: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:16:27.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:16:29.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113385, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:16:32.683: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:16:32.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1648" for this suite. STEP: Destroying namespace "webhook-1648-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.698 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":56,"skipped":971,"failed":0} [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:16:32.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 29 21:16:32.928: INFO: Created pod &Pod{ObjectMeta:{dns-7299 dns-7299 /api/v1/namespaces/dns-7299/pods/dns-7299 0a425824-0f32-4a71-853e-b03e47e2f95f 3784694 0 2020-03-29 21:16:32 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qnfsm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qnfsm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qnfsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 29 21:16:36.954: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7299 PodName:dns-7299 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:16:36.954: INFO: >>> kubeConfig: /root/.kube/config I0329 21:16:36.989000 6 log.go:172] (0xc0028d2bb0) (0xc002a79f40) Create stream I0329 21:16:36.989040 6 log.go:172] (0xc0028d2bb0) (0xc002a79f40) Stream added, broadcasting: 1 I0329 21:16:36.990899 6 log.go:172] (0xc0028d2bb0) Reply frame received for 1 I0329 21:16:36.990951 6 log.go:172] (0xc0028d2bb0) (0xc001808000) Create stream I0329 21:16:36.990968 6 log.go:172] (0xc0028d2bb0) (0xc001808000) Stream added, broadcasting: 3 I0329 21:16:36.991903 6 log.go:172] (0xc0028d2bb0) Reply frame received for 3 I0329 21:16:36.991930 6 log.go:172] (0xc0028d2bb0) (0xc001b2fd60) Create stream I0329 21:16:36.991941 6 log.go:172] (0xc0028d2bb0) (0xc001b2fd60) Stream added, broadcasting: 5 I0329 21:16:36.992781 6 log.go:172] (0xc0028d2bb0) Reply frame received for 5 I0329 21:16:37.081512 6 log.go:172] (0xc0028d2bb0) Data frame received for 3 I0329 21:16:37.081551 6 log.go:172] (0xc001808000) (3) Data frame handling I0329 21:16:37.081582 6 log.go:172] (0xc001808000) (3) Data frame sent I0329 21:16:37.082276 6 log.go:172] (0xc0028d2bb0) Data frame received for 5 I0329 21:16:37.082298 6 log.go:172] (0xc001b2fd60) (5) Data frame handling I0329 21:16:37.082328 6 log.go:172] (0xc0028d2bb0) Data frame received for 3 I0329 21:16:37.082354 6 log.go:172] (0xc001808000) (3) Data frame handling I0329 21:16:37.084066 6 log.go:172] (0xc0028d2bb0) Data frame received for 1 I0329 21:16:37.084083 6 log.go:172] (0xc002a79f40) (1) Data frame handling I0329 21:16:37.084099 6 log.go:172] (0xc002a79f40) (1) Data frame sent I0329 21:16:37.084109 6 log.go:172] (0xc0028d2bb0) (0xc002a79f40) Stream removed, broadcasting: 1 I0329 21:16:37.084198 6 log.go:172] (0xc0028d2bb0) (0xc002a79f40) Stream removed, broadcasting: 1 I0329 21:16:37.084210 6 log.go:172] (0xc0028d2bb0) (0xc001808000) Stream removed, broadcasting: 3 I0329 21:16:37.084324 6 log.go:172] (0xc0028d2bb0) Go away received I0329 21:16:37.084388 6 log.go:172] (0xc0028d2bb0) (0xc001b2fd60) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 29 21:16:37.084: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7299 PodName:dns-7299 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:16:37.084: INFO: >>> kubeConfig: /root/.kube/config I0329 21:16:37.113428 6 log.go:172] (0xc0028d31e0) (0xc001bee140) Create stream I0329 21:16:37.113454 6 log.go:172] (0xc0028d31e0) (0xc001bee140) Stream added, broadcasting: 1 I0329 21:16:37.114915 6 log.go:172] (0xc0028d31e0) Reply frame received for 1 I0329 21:16:37.114941 6 log.go:172] (0xc0028d31e0) (0xc001bee320) Create stream I0329 21:16:37.114952 6 log.go:172] (0xc0028d31e0) (0xc001bee320) Stream added, broadcasting: 3 I0329 21:16:37.115985 6 log.go:172] (0xc0028d31e0) Reply frame received for 3 I0329 21:16:37.116017 6 log.go:172] (0xc0028d31e0) (0xc001b2fe00) Create stream I0329 21:16:37.116029 6 log.go:172] (0xc0028d31e0) (0xc001b2fe00) Stream added, broadcasting: 5 I0329 21:16:37.116895 6 log.go:172] (0xc0028d31e0) Reply frame received for 5 I0329 21:16:37.179951 6 log.go:172] (0xc0028d31e0) Data frame received for 3 I0329 21:16:37.179983 6 log.go:172] (0xc001bee320) (3) Data frame handling I0329 21:16:37.180021 6 log.go:172] (0xc001bee320) (3) Data frame sent I0329 21:16:37.180960 6 log.go:172] (0xc0028d31e0) Data frame received for 5 I0329 21:16:37.180989 6 log.go:172] (0xc001b2fe00) (5) Data frame handling I0329 21:16:37.181016 6 log.go:172] (0xc0028d31e0) Data frame received for 3 I0329 21:16:37.181035 6 log.go:172] (0xc001bee320) (3) Data frame handling I0329 21:16:37.182257 6 log.go:172] (0xc0028d31e0) Data frame received for 1 I0329 21:16:37.182288 6 log.go:172] (0xc001bee140) (1) Data frame handling I0329 21:16:37.182321 6 log.go:172] (0xc001bee140) (1) Data frame sent I0329 21:16:37.182335 6 log.go:172] (0xc0028d31e0) (0xc001bee140) Stream removed, broadcasting: 1 I0329 21:16:37.182351 6 log.go:172] (0xc0028d31e0) Go away received I0329 21:16:37.182460 6 log.go:172] (0xc0028d31e0) (0xc001bee140) Stream removed, broadcasting: 1 I0329 21:16:37.182489 6 log.go:172] (0xc0028d31e0) (0xc001bee320) Stream removed, broadcasting: 3 I0329 21:16:37.182497 6 log.go:172] (0xc0028d31e0) (0xc001b2fe00) Stream removed, broadcasting: 5 Mar 29 21:16:37.182: INFO: Deleting pod dns-7299... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:16:37.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7299" for this suite. 
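------------------------------
The dns-7299 pod dumped above reduces to very little spec: dnsPolicy None plus an explicit dnsConfig, which together hand resolv.conf entirely to the pod author. A sketch reconstructing just that part (nameserver, search path, image, and args taken from the dump; everything else left to defaults):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // dnsPolicy: None disables cluster DNS injection; dnsConfig below is
    // exactly what the dns-suffix / dns-server-list probes verified.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-config-demo"},
        Spec: corev1.PodSpec{
            DNSPolicy: corev1.DNSNone,
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
            Containers: []corev1.Container{{
                Name:  "agnhost",
                Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                Args:  []string{"pause"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------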
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":57,"skipped":971,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:16:37.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 29 21:16:37.738: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3758 /api/v1/namespaces/watch-3758/configmaps/e2e-watch-test-label-changed 93681412-a462-4817-bb3e-bcf28f94deb3 3784725 0 2020-03-29 21:16:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 29 21:16:37.738: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3758 /api/v1/namespaces/watch-3758/configmaps/e2e-watch-test-label-changed 93681412-a462-4817-bb3e-bcf28f94deb3 3784726 0 2020-03-29 21:16:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 29 21:16:37.738: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3758 /api/v1/namespaces/watch-3758/configmaps/e2e-watch-test-label-changed 93681412-a462-4817-bb3e-bcf28f94deb3 3784728 0 2020-03-29 21:16:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 29 21:16:47.791: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3758 /api/v1/namespaces/watch-3758/configmaps/e2e-watch-test-label-changed 93681412-a462-4817-bb3e-bcf28f94deb3 3784783 0 2020-03-29 21:16:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 29 21:16:47.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3758 /api/v1/namespaces/watch-3758/configmaps/e2e-watch-test-label-changed 93681412-a462-4817-bb3e-bcf28f94deb3 3784784 0 2020-03-29 21:16:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} Mar 29 21:16:47.792: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3758 /api/v1/namespaces/watch-3758/configmaps/e2e-watch-test-label-changed 93681412-a462-4817-bb3e-bcf28f94deb3 3784785 0 2020-03-29 21:16:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:16:47.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3758" for this suite. • [SLOW TEST:10.532 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":58,"skipped":996,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:16:47.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:16:47.899: INFO: Create a RollingUpdate DaemonSet Mar 29 21:16:47.902: INFO: Check that daemon pods launch on every node of the cluster Mar 29 21:16:47.905: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:16:47.910: INFO: Number of nodes with available pods: 0 Mar 29 21:16:47.910: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:16:48.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:16:48.920: INFO: Number of nodes with available pods: 0 Mar 29 21:16:48.921: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:16:49.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:16:49.918: INFO: Number of nodes with available pods: 0 Mar 29 21:16:49.918: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:16:50.947: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 
21:16:50.951: INFO: Number of nodes with available pods: 1 Mar 29 21:16:50.951: INFO: Node jerma-worker2 is running more than one daemon pod Mar 29 21:16:51.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:16:51.919: INFO: Number of nodes with available pods: 2 Mar 29 21:16:51.919: INFO: Number of running nodes: 2, number of available pods: 2 Mar 29 21:16:51.919: INFO: Update the DaemonSet to trigger a rollout Mar 29 21:16:51.926: INFO: Updating DaemonSet daemon-set Mar 29 21:16:55.945: INFO: Roll back the DaemonSet before rollout is complete Mar 29 21:16:55.952: INFO: Updating DaemonSet daemon-set Mar 29 21:16:55.952: INFO: Make sure DaemonSet rollback is complete Mar 29 21:16:55.975: INFO: Wrong image for pod: daemon-set-gm8jx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 29 21:16:55.975: INFO: Pod daemon-set-gm8jx is not available Mar 29 21:16:55.994: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:16:56.998: INFO: Wrong image for pod: daemon-set-gm8jx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 29 21:16:56.999: INFO: Pod daemon-set-gm8jx is not available Mar 29 21:16:57.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:16:58.030: INFO: Pod daemon-set-gdbrj is not available Mar 29 21:16:58.051: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6884, will wait for the garbage collector to delete the pods Mar 29 21:16:58.114: INFO: Deleting DaemonSet.extensions daemon-set took: 5.526829ms Mar 29 21:16:58.415: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.217761ms Mar 29 21:17:09.329: INFO: Number of nodes with available pods: 0 Mar 29 21:17:09.329: INFO: Number of running nodes: 0, number of available pods: 0 Mar 29 21:17:09.331: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6884/daemonsets","resourceVersion":"3784931"},"items":null} Mar 29 21:17:09.334: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6884/pods","resourceVersion":"3784931"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:17:09.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6884" for this suite. 
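------------------------------
The rollback flow above is two template updates: one to an unpullable image (foo:non-existent) to start a RollingUpdate, then one restoring the old image before the rollout finishes, after which pods still running the old image must not be restarted. A hedged client-go sketch, assuming a recent client-go, an illustrative namespace, and no conflict-retry handling:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ds := cs.AppsV1().DaemonSets("default") // illustrative namespace

    setImage := func(image string) {
        d, err := ds.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        d.Spec.Template.Spec.Containers[0].Image = image
        if _, err := ds.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }

    // Trigger a rollout with an image that can never be pulled...
    setImage("foo:non-existent")
    // ...then roll back mid-rollout by restoring the old template.
    setImage("docker.io/library/httpd:2.4.38-alpine")
    fmt.Println("rollback submitted")
}
------------------------------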
• [SLOW TEST:21.542 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":59,"skipped":998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:17:09.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 29 21:17:09.423: INFO: Waiting up to 5m0s for pod "client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e" in namespace "containers-6464" to be "success or failure" Mar 29 21:17:09.449: INFO: Pod "client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.253786ms Mar 29 21:17:11.453: INFO: Pod "client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029851827s Mar 29 21:17:13.456: INFO: Pod "client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032510799s STEP: Saw pod success Mar 29 21:17:13.456: INFO: Pod "client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e" satisfied condition "success or failure" Mar 29 21:17:13.458: INFO: Trying to get logs from node jerma-worker2 pod client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e container test-container: STEP: delete the pod Mar 29 21:17:13.495: INFO: Waiting for pod client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e to disappear Mar 29 21:17:13.527: INFO: Pod client-containers-0270c682-107c-480b-a34f-ff4d8d7c9c2e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:17:13.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6464" for this suite. 
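------------------------------
The Docker Containers spec above overrides both halves of the image's startup: Command replaces the image's ENTRYPOINT and Args replaces its CMD, so with both set the image's own defaults are ignored entirely. A sketch of such a pod, with illustrative names, image, and strings:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Command overrides ENTRYPOINT; Args overrides CMD.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"/bin/echo"},
                Args:    []string{"override", "arguments"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------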
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1023,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:17:13.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4184.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4184.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:17:19.645: INFO: DNS probes using dns-test-0f1f59df-7c4f-42aa-8ec6-e340e25a447a succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4184.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4184.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:17:25.780: INFO: File wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:25.783: INFO: File jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:25.783: INFO: Lookups using dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 failed for: [wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local] Mar 29 21:17:30.787: INFO: File wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:30.790: INFO: File jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 29 21:17:30.790: INFO: Lookups using dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 failed for: [wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local] Mar 29 21:17:35.786: INFO: File wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:35.790: INFO: File jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:35.790: INFO: Lookups using dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 failed for: [wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local] Mar 29 21:17:40.787: INFO: File wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:40.791: INFO: File jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:40.791: INFO: Lookups using dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 failed for: [wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local] Mar 29 21:17:45.788: INFO: File wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:45.792: INFO: File jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local from pod dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 29 21:17:45.792: INFO: Lookups using dns-4184/dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 failed for: [wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local] Mar 29 21:17:50.791: INFO: DNS probes using dns-test-296c05d1-b6bf-470a-88ad-ab1ed185e1b4 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4184.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4184.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4184.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4184.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:17:57.297: INFO: DNS probes using dns-test-2818a87e-bc2c-4950-b964-0ae0879edaa8 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:17:57.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4184" for this suite. 
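------------------------------
The ExternalName flow above is driven by a Service whose spec carries nothing but a CNAME target, published as <name>.<namespace>.svc.cluster.local — the record the dig probes query; the test then flips the target to bar.example.com and finally converts the service to type ClusterIP. A client-go sketch of the initial Service, assuming a recent client-go and an illustrative namespace:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // An ExternalName service resolves to a CNAME instead of a ClusterIP.
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
        Spec: corev1.ServiceSpec{
            Type:         corev1.ServiceTypeExternalName,
            ExternalName: "foo.example.com",
        },
    }
    if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("service created; updating spec.externalName changes the CNAME")
}
------------------------------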
• [SLOW TEST:43.865 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":61,"skipped":1039,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:17:57.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:18:17.631: INFO: Container started at 2020-03-29 21:18:00 +0000 UTC, pod became ready at 2020-03-29 21:18:17 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:18:17.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7559" for this suite. 
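Note: the assertion above ("Container started at ... pod became ready at ...") verifies that readiness lags container start by at least the configured initial delay, and that the container is never restarted. A minimal sketch of such a probe; the image, command, and delay value are illustrative assumptions, not taken from the log:
apiVersion: v1
kind: Pod
metadata:
  name: readiness-example              # illustrative; the suite generates a unique name
spec:
  containers:
  - name: readiness-example
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["true"]              # probe succeeds once it can run at all
      initialDelaySeconds: 15          # pod must not report Ready before this delay
      periodSeconds: 5
With this shape, the pod's Ready condition flips to True only after the initial delay, while restartCount stays at 0.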
• [SLOW TEST:20.238 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1044,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:18:17.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 29 21:18:22.230: INFO: Successfully updated pod "labelsupdate3e4cd502-797c-4700-8547-00767c0a1609" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:18:24.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-693" for this suite. 
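Note: the "Successfully updated pod" line above corresponds to relabeling a pod whose labels are exposed through a projected downward API volume; the kubelet rewrites the mounted file when the labels change. A sketch under assumed names and image (only the test's intent is taken from the log):
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example           # illustrative; the suite generates a unique name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels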
• [SLOW TEST:6.625 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1045,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:18:24.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:18:54.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3375" for this suite. 
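Note: the three container names above encode the restart policy under test, terminate-cmd-rpa (restartPolicy: Always), terminate-cmd-rpof (OnFailure), and terminate-cmd-rpn (Never); for each, the suite checks RestartCount, Phase, the Ready condition, and State after the container exits. A sketch of the Never variant, with an assumed image and command:
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd-rpn
    image: busybox                     # assumed image
    command: ["sh", "-c", "exit 0"]    # exits immediately; status should show a terminated state and RestartCount 0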
• [SLOW TEST:30.445 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1055,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:18:54.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:18:55.155: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:18:57.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113535, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113535, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113535, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113535, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:19:00.194: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 29 21:19:00.215: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:19:00.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5591" for this suite. STEP: Destroying namespace "webhook-5591-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.656 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":65,"skipped":1058,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:19:00.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:19:00.526: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ec8689b3-3aad-4096-aba9-abcf023630f1", Controller:(*bool)(0xc00323cc22), BlockOwnerDeletion:(*bool)(0xc00323cc23)}} Mar 29 21:19:00.554: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6cf109bb-ec3e-481f-874f-720f835fa4d0", Controller:(*bool)(0xc0032bf00a), BlockOwnerDeletion:(*bool)(0xc0032bf00b)}} Mar 29 21:19:00.619: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0c2897d8-9c4b-43cf-95d1-47fd2d8c9dd4", Controller:(*bool)(0xc00323cdca), BlockOwnerDeletion:(*bool)(0xc00323cdcb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:19:05.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4539" for this suite. 
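Note: the three OwnerReferences dumps above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2); the test passes if the garbage collector tolerates the circle instead of deadlocking on it. One link of that circle, reconstructed from the logged metadata; the controller/blockOwnerDeletion values appear only as *bool pointers in the log, so true is an assumption:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: ec8689b3-3aad-4096-aba9-abcf023630f1   # UID logged for pod1's reference to pod3
    controller: true                            # assumed; log shows only a pointer
    blockOwnerDeletion: true                    # assumed; log shows only a pointer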
• [SLOW TEST:5.281 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":66,"skipped":1060,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:19:05.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-9rxm STEP: Creating a pod to test atomic-volume-subpath Mar 29 21:19:05.765: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9rxm" in namespace "subpath-4744" to be "success or failure" Mar 29 21:19:05.780: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.850393ms Mar 29 21:19:07.784: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019294568s Mar 29 21:19:09.788: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 4.023091455s Mar 29 21:19:11.792: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 6.026811588s Mar 29 21:19:13.796: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 8.031155024s Mar 29 21:19:15.800: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 10.035323215s Mar 29 21:19:17.804: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 12.039439442s Mar 29 21:19:19.807: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 14.042569543s Mar 29 21:19:21.812: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 16.046897937s Mar 29 21:19:23.816: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 18.051094564s Mar 29 21:19:25.820: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 20.055244443s Mar 29 21:19:27.824: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Running", Reason="", readiness=true. Elapsed: 22.059042823s Mar 29 21:19:29.828: INFO: Pod "pod-subpath-test-configmap-9rxm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.062909056s STEP: Saw pod success Mar 29 21:19:29.828: INFO: Pod "pod-subpath-test-configmap-9rxm" satisfied condition "success or failure" Mar 29 21:19:29.831: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-9rxm container test-container-subpath-configmap-9rxm: STEP: delete the pod Mar 29 21:19:29.867: INFO: Waiting for pod pod-subpath-test-configmap-9rxm to disappear Mar 29 21:19:29.877: INFO: Pod pod-subpath-test-configmap-9rxm no longer exists STEP: Deleting pod pod-subpath-test-configmap-9rxm Mar 29 21:19:29.877: INFO: Deleting pod "pod-subpath-test-configmap-9rxm" in namespace "subpath-4744" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:19:29.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4744" for this suite. • [SLOW TEST:24.263 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":67,"skipped":1074,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:19:29.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:19:29.968: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:19:36.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6910" for this suite. 
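Note: the listing test above registers CustomResourceDefinitions and lists them back through the apiextensions API. The suite does not print its CRD objects, so this is a purely illustrative minimal CRD of the shape a v1.17-era suite exercises (group, names, and version are invented for the sketch):
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com                 # must be <plural>.<group>
spec:
  group: example.com
  version: v1beta1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
Once created, the object shows up in a list call such as kubectl get customresourcedefinitions.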
• [SLOW TEST:6.433 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":68,"skipped":1089,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:19:36.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ea42b5a8-f758-4c45-bf5e-97ff4f17bfaf STEP: Creating a pod to test consume secrets Mar 29 21:19:36.430: INFO: Waiting up to 5m0s for pod "pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6" in namespace "secrets-2172" to be "success or failure" Mar 29 21:19:36.434: INFO: Pod "pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045175ms Mar 29 21:19:38.438: INFO: Pod "pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007902152s Mar 29 21:19:40.442: INFO: Pod "pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011447435s STEP: Saw pod success Mar 29 21:19:40.442: INFO: Pod "pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6" satisfied condition "success or failure" Mar 29 21:19:40.444: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6 container secret-volume-test: STEP: delete the pod Mar 29 21:19:40.501: INFO: Waiting for pod pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6 to disappear Mar 29 21:19:40.512: INFO: Pod pod-secrets-b73cc28b-dbde-4e8f-905a-76cf12f2fea6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:19:40.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2172" for this suite. 
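Note: the secret-volume pod above runs as a non-root user with defaultMode and fsGroup set, and passes when the container can read the mounted key with the expected ownership and permissions. A sketch; the secret name matches the log, while the uid/gid, mode, image, and command are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example              # illustrative; the suite generates a unique name
spec:
  securityContext:
    runAsUser: 1000                      # non-root; illustrative uid
    fsGroup: 1001                        # illustrative gid
  containers:
  - name: secret-volume-test
    image: busybox                       # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-ea42b5a8-f758-4c45-bf5e-97ff4f17bfaf
      defaultMode: 0440                  # illustrative mode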
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:19:40.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 29 21:19:40.698: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:40.703: INFO: Number of nodes with available pods: 0 Mar 29 21:19:40.704: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:41.733: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:41.737: INFO: Number of nodes with available pods: 0 Mar 29 21:19:41.737: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:42.709: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:42.712: INFO: Number of nodes with available pods: 0 Mar 29 21:19:42.712: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:43.708: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:43.711: INFO: Number of nodes with available pods: 0 Mar 29 21:19:43.711: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:44.708: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:44.711: INFO: Number of nodes with available pods: 2 Mar 29 21:19:44.711: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 29 21:19:44.729: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:44.732: INFO: Number of nodes with available pods: 1 Mar 29 21:19:44.732: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:45.741: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:45.745: INFO: Number of nodes with available pods: 1 Mar 29 21:19:45.745: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:46.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:46.740: INFO: Number of nodes with available pods: 1 Mar 29 21:19:46.740: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:47.738: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:47.741: INFO: Number of nodes with available pods: 1 Mar 29 21:19:47.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:48.818: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:48.822: INFO: Number of nodes with available pods: 1 Mar 29 21:19:48.822: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:49.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:49.740: INFO: Number of nodes with available pods: 1 Mar 29 21:19:49.740: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:50.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:50.741: INFO: Number of nodes with available pods: 1 Mar 29 21:19:50.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:51.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:51.740: INFO: Number of nodes with available pods: 1 Mar 29 21:19:51.740: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:52.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:52.740: INFO: Number of nodes with available pods: 1 Mar 29 21:19:52.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:53.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:53.740: INFO: Number of nodes with available pods: 1 Mar 29 21:19:53.740: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:54.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:54.741: INFO: Number of nodes with available pods: 1 Mar 29 21:19:54.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:55.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:55.740: INFO: Number of nodes with available pods: 1 Mar 29 21:19:55.740: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:56.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:56.741: INFO: Number of nodes with available pods: 1 Mar 29 21:19:56.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:57.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:57.741: INFO: Number of nodes with available pods: 1 Mar 29 21:19:57.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:58.739: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:58.742: INFO: Number of nodes with available pods: 1 Mar 29 21:19:58.742: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:19:59.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:19:59.741: INFO: Number of nodes with available pods: 1 Mar 29 21:19:59.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:20:00.758: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:20:00.761: INFO: Number of nodes with available pods: 1 Mar 29 21:20:00.761: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:20:01.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:20:01.741: INFO: Number of nodes with available pods: 1 Mar 29 21:20:01.741: INFO: Node jerma-worker is running more than one daemon pod Mar 29 21:20:02.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 21:20:02.740: INFO: Number of nodes with available pods: 2 Mar 29 21:20:02.741: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9595, will wait for the garbage collector to delete the pods Mar 29 21:20:02.802: INFO: Deleting DaemonSet.extensions daemon-set took: 6.002494ms Mar 29 21:20:03.103: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.212461ms Mar 29 21:20:09.606: INFO: Number of nodes with available pods: 0 Mar 29 21:20:09.606: INFO: Number of running nodes: 0, number of 
available pods: 0 Mar 29 21:20:09.608: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9595/daemonsets","resourceVersion":"3786023"},"items":null} Mar 29 21:20:09.611: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9595/pods","resourceVersion":"3786023"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:20:09.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9595" for this suite. • [SLOW TEST:29.123 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":70,"skipped":1129,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:20:09.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 29 21:20:09.691: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 29 21:20:09.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1458' Mar 29 21:20:12.711: INFO: stderr: "" Mar 29 21:20:12.712: INFO: stdout: "service/agnhost-slave created\n" Mar 29 21:20:12.712: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 29 21:20:12.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1458' Mar 29 21:20:12.993: INFO: stderr: "" Mar 29 21:20:12.993: INFO: stdout: "service/agnhost-master created\n" Mar 29 21:20:12.994: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 29 21:20:12.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1458' Mar 29 21:20:13.260: INFO: stderr: "" Mar 29 21:20:13.260: INFO: stdout: "service/frontend created\n" Mar 29 21:20:13.260: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 29 21:20:13.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1458' Mar 29 21:20:13.511: INFO: stderr: "" Mar 29 21:20:13.511: INFO: stdout: "deployment.apps/frontend created\n" Mar 29 21:20:13.511: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 29 21:20:13.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1458' Mar 29 21:20:13.794: INFO: stderr: "" Mar 29 21:20:13.794: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 29 21:20:13.795: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 29 21:20:13.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1458' Mar 29 21:20:14.055: INFO: stderr: "" Mar 29 21:20:14.055: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 29 21:20:14.055: INFO: Waiting for all frontend pods to be Running. Mar 29 21:20:24.106: INFO: Waiting for frontend to serve content. Mar 29 21:20:24.116: INFO: Trying to add a new entry to the guestbook. Mar 29 21:20:24.129: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 29 21:20:24.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1458' Mar 29 21:20:24.261: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:20:24.261: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 29 21:20:24.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1458' Mar 29 21:20:24.431: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:20:24.431: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 29 21:20:24.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1458' Mar 29 21:20:24.596: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:20:24.596: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 29 21:20:24.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1458' Mar 29 21:20:24.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:20:24.697: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 29 21:20:24.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1458' Mar 29 21:20:24.824: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:20:24.824: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 29 21:20:24.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1458' Mar 29 21:20:24.938: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:20:24.939: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:20:24.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1458" for this suite. • [SLOW TEST:15.300 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":71,"skipped":1151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:20:24.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:20:41.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6248" for this suite. • [SLOW TEST:16.209 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":72,"skipped":1217,"failed":0} SSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:20:41.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:20:41.269: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b4b1caf7-616f-4cb1-9a07-c32c74446d90" in namespace "security-context-test-9097" to be "success or failure" Mar 29 21:20:41.282: INFO: Pod "alpine-nnp-false-b4b1caf7-616f-4cb1-9a07-c32c74446d90": Phase="Pending", Reason="", readiness=false. Elapsed: 13.311526ms Mar 29 21:20:43.286: INFO: Pod "alpine-nnp-false-b4b1caf7-616f-4cb1-9a07-c32c74446d90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017113219s Mar 29 21:20:45.290: INFO: Pod "alpine-nnp-false-b4b1caf7-616f-4cb1-9a07-c32c74446d90": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02139527s Mar 29 21:20:45.290: INFO: Pod "alpine-nnp-false-b4b1caf7-616f-4cb1-9a07-c32c74446d90" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:20:45.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9097" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1220,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:20:45.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-5db26aa7-d974-4125-ae64-62195295990f STEP: Creating a pod to test consume secrets Mar 29 21:20:45.410: INFO: Waiting up to 5m0s for pod "pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a" in namespace "secrets-6189" to be "success or failure" Mar 29 21:20:45.419: INFO: Pod "pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.097182ms Mar 29 21:20:47.423: INFO: Pod "pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012745057s Mar 29 21:20:49.427: INFO: Pod "pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016766416s STEP: Saw pod success Mar 29 21:20:49.427: INFO: Pod "pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a" satisfied condition "success or failure" Mar 29 21:20:49.430: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a container secret-volume-test: STEP: delete the pod Mar 29 21:20:49.508: INFO: Waiting for pod pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a to disappear Mar 29 21:20:49.516: INFO: Pod pod-secrets-c4c76b37-526d-4d5b-9dc5-667adb10596a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:20:49.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6189" for this suite. 
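Note: "with mappings" above means the secret's keys are remapped to different file paths via items in the volume source. A sketch; the secret name is the one logged, while the key name, mapped path, image, and command are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mappings-example     # illustrative; the suite generates a unique name
spec:
  containers:
  - name: secret-volume-test
    image: busybox                       # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-5db26aa7-d974-4125-ae64-62195295990f
      items:
      - key: data-1                      # illustrative key name
        path: new-path-data-1            # file the key is mapped to inside the mount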
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1230,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:20:49.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4319 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 29 21:20:49.563: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 29 21:21:15.726: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.198 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4319 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:21:15.726: INFO: >>> kubeConfig: /root/.kube/config I0329 21:21:15.756980 6 log.go:172] (0xc00340e630) (0xc0017d1040) Create stream I0329 21:21:15.757009 6 log.go:172] (0xc00340e630) (0xc0017d1040) Stream added, broadcasting: 1 I0329 21:21:15.758887 6 log.go:172] (0xc00340e630) Reply frame received for 1 I0329 21:21:15.758935 6 log.go:172] (0xc00340e630) (0xc001e95ae0) Create stream I0329 21:21:15.758951 6 log.go:172] (0xc00340e630) (0xc001e95ae0) Stream added, broadcasting: 3 I0329 21:21:15.760057 6 log.go:172] (0xc00340e630) Reply frame received for 3 I0329 21:21:15.760092 6 log.go:172] (0xc00340e630) (0xc0017d10e0) Create stream I0329 21:21:15.760103 6 log.go:172] (0xc00340e630) (0xc0017d10e0) Stream added, broadcasting: 5 I0329 21:21:15.761005 6 log.go:172] (0xc00340e630) Reply frame received for 5 I0329 21:21:16.823181 6 log.go:172] (0xc00340e630) Data frame received for 3 I0329 21:21:16.823226 6 log.go:172] (0xc001e95ae0) (3) Data frame handling I0329 21:21:16.823262 6 log.go:172] (0xc001e95ae0) (3) Data frame sent I0329 21:21:16.823286 6 log.go:172] (0xc00340e630) Data frame received for 3 I0329 21:21:16.823306 6 log.go:172] (0xc001e95ae0) (3) Data frame handling I0329 21:21:16.823434 6 log.go:172] (0xc00340e630) Data frame received for 5 I0329 21:21:16.823460 6 log.go:172] (0xc0017d10e0) (5) Data frame handling I0329 21:21:16.825601 6 log.go:172] (0xc00340e630) Data frame received for 1 I0329 21:21:16.825618 6 log.go:172] (0xc0017d1040) (1) Data frame handling I0329 21:21:16.825625 6 log.go:172] (0xc0017d1040) (1) Data frame sent I0329 21:21:16.825635 6 log.go:172] (0xc00340e630) (0xc0017d1040) Stream removed, broadcasting: 1 I0329 21:21:16.825645 6 log.go:172] (0xc00340e630) Go away received I0329 21:21:16.825797 6 log.go:172] (0xc00340e630) (0xc0017d1040) Stream removed, broadcasting: 1 I0329 21:21:16.825865 6 log.go:172] (0xc00340e630) (0xc001e95ae0) Stream removed, broadcasting: 3 I0329 21:21:16.825901 6 log.go:172] 
(0xc00340e630) (0xc0017d10e0) Stream removed, broadcasting: 5 Mar 29 21:21:16.825: INFO: Found all expected endpoints: [netserver-0] Mar 29 21:21:16.829: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.7 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4319 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:21:16.829: INFO: >>> kubeConfig: /root/.kube/config I0329 21:21:16.857078 6 log.go:172] (0xc0028d2e70) (0xc001808140) Create stream I0329 21:21:16.857215 6 log.go:172] (0xc0028d2e70) (0xc001808140) Stream added, broadcasting: 1 I0329 21:21:16.858759 6 log.go:172] (0xc0028d2e70) Reply frame received for 1 I0329 21:21:16.858787 6 log.go:172] (0xc0028d2e70) (0xc002a78aa0) Create stream I0329 21:21:16.858797 6 log.go:172] (0xc0028d2e70) (0xc002a78aa0) Stream added, broadcasting: 3 I0329 21:21:16.859501 6 log.go:172] (0xc0028d2e70) Reply frame received for 3 I0329 21:21:16.859521 6 log.go:172] (0xc0028d2e70) (0xc002a78b40) Create stream I0329 21:21:16.859532 6 log.go:172] (0xc0028d2e70) (0xc002a78b40) Stream added, broadcasting: 5 I0329 21:21:16.860401 6 log.go:172] (0xc0028d2e70) Reply frame received for 5 I0329 21:21:17.948318 6 log.go:172] (0xc0028d2e70) Data frame received for 3 I0329 21:21:17.948376 6 log.go:172] (0xc002a78aa0) (3) Data frame handling I0329 21:21:17.948455 6 log.go:172] (0xc0028d2e70) Data frame received for 5 I0329 21:21:17.948488 6 log.go:172] (0xc002a78b40) (5) Data frame handling I0329 21:21:17.948519 6 log.go:172] (0xc002a78aa0) (3) Data frame sent I0329 21:21:17.948545 6 log.go:172] (0xc0028d2e70) Data frame received for 3 I0329 21:21:17.948559 6 log.go:172] (0xc002a78aa0) (3) Data frame handling I0329 21:21:17.950903 6 log.go:172] (0xc0028d2e70) Data frame received for 1 I0329 21:21:17.950920 6 log.go:172] (0xc001808140) (1) Data frame handling I0329 21:21:17.950944 6 log.go:172] (0xc001808140) (1) Data frame sent I0329 21:21:17.950956 6 log.go:172] (0xc0028d2e70) (0xc001808140) Stream removed, broadcasting: 1 I0329 21:21:17.951016 6 log.go:172] (0xc0028d2e70) (0xc001808140) Stream removed, broadcasting: 1 I0329 21:21:17.951027 6 log.go:172] (0xc0028d2e70) (0xc002a78aa0) Stream removed, broadcasting: 3 I0329 21:21:17.951036 6 log.go:172] (0xc0028d2e70) (0xc002a78b40) Stream removed, broadcasting: 5 Mar 29 21:21:17.951: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:21:17.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0329 21:21:17.951129 6 log.go:172] (0xc0028d2e70) Go away received STEP: Destroying namespace "pod-network-test-4319" for this suite. 
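Note: the ExecWithOptions entries above stream a shell through the API server into the host-test pod, sending "hostName" over UDP to each netserver pod IP and expecting an echoed hostname back. Using exactly the namespace, pod, container, and command shown in the log, the equivalent manual invocation would be roughly:
kubectl --kubeconfig=/root/.kube/config exec -n pod-network-test-4319 host-test-container-pod -c agnhost -- /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.198 8081 | grep -v '^\s*$'"
The grep filter strips blank lines so an empty reply counts as a failed probe for that endpoint.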
• [SLOW TEST:28.437 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:21:17.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 29 21:21:18.005: INFO: Waiting up to 5m0s for pod "var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839" in namespace "var-expansion-3516" to be "success or failure" Mar 29 21:21:18.033: INFO: Pod "var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839": Phase="Pending", Reason="", readiness=false. Elapsed: 28.091635ms Mar 29 21:21:20.037: INFO: Pod "var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032560503s Mar 29 21:21:22.042: INFO: Pod "var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03699587s STEP: Saw pod success Mar 29 21:21:22.042: INFO: Pod "var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839" satisfied condition "success or failure" Mar 29 21:21:22.045: INFO: Trying to get logs from node jerma-worker pod var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839 container dapi-container: STEP: delete the pod Mar 29 21:21:22.094: INFO: Waiting for pod var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839 to disappear Mar 29 21:21:22.103: INFO: Pod var-expansion-2edea11f-966b-436f-9ab8-66eb709e9839 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:21:22.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3516" for this suite. 
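Note: the env-composition test above verifies that $(VAR) references inside env values are expanded from previously defined variables when the container starts. A sketch; the container name matches the log, while the variable names, values, and image are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example            # illustrative; the suite generates a unique name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                       # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"            # expands to "foo-value;;bar-value" at container start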
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1315,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:21:22.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4670 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4670 STEP: creating replication controller externalsvc in namespace services-4670 I0329 21:21:22.233306 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4670, replica count: 2 I0329 21:21:25.283669 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:21:28.283914 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 29 21:21:28.335: INFO: Creating new exec pod Mar 29 21:21:32.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4670 execpodvvqlk -- /bin/sh -x -c nslookup clusterip-service' Mar 29 21:21:32.659: INFO: stderr: "I0329 21:21:32.544758 467 log.go:172] (0xc0009da9a0) (0xc000727ae0) Create stream\nI0329 21:21:32.544841 467 log.go:172] (0xc0009da9a0) (0xc000727ae0) Stream added, broadcasting: 1\nI0329 21:21:32.548206 467 log.go:172] (0xc0009da9a0) Reply frame received for 1\nI0329 21:21:32.548260 467 log.go:172] (0xc0009da9a0) (0xc000afc000) Create stream\nI0329 21:21:32.548276 467 log.go:172] (0xc0009da9a0) (0xc000afc000) Stream added, broadcasting: 3\nI0329 21:21:32.549377 467 log.go:172] (0xc0009da9a0) Reply frame received for 3\nI0329 21:21:32.549425 467 log.go:172] (0xc0009da9a0) (0xc000727cc0) Create stream\nI0329 21:21:32.549440 467 log.go:172] (0xc0009da9a0) (0xc000727cc0) Stream added, broadcasting: 5\nI0329 21:21:32.550588 467 log.go:172] (0xc0009da9a0) Reply frame received for 5\nI0329 21:21:32.634357 467 log.go:172] (0xc0009da9a0) Data frame received for 5\nI0329 21:21:32.634391 467 log.go:172] (0xc000727cc0) (5) Data frame handling\nI0329 21:21:32.634437 467 log.go:172] (0xc000727cc0) (5) Data frame sent\n+ nslookup clusterip-service\nI0329 21:21:32.643700 467 log.go:172] (0xc0009da9a0) Data frame received for 3\nI0329 21:21:32.643732 467 log.go:172] (0xc000afc000) (3) Data frame handling\nI0329 21:21:32.643753 467 log.go:172] (0xc000afc000) (3) Data frame sent\nI0329 21:21:32.645106 467 
log.go:172] (0xc0009da9a0) Data frame received for 3\nI0329 21:21:32.645230 467 log.go:172] (0xc000afc000) (3) Data frame handling\nI0329 21:21:32.645240 467 log.go:172] (0xc000afc000) (3) Data frame sent\nI0329 21:21:32.645958 467 log.go:172] (0xc0009da9a0) Data frame received for 5\nI0329 21:21:32.645975 467 log.go:172] (0xc000727cc0) (5) Data frame handling\nI0329 21:21:32.646109 467 log.go:172] (0xc0009da9a0) Data frame received for 3\nI0329 21:21:32.646141 467 log.go:172] (0xc000afc000) (3) Data frame handling\nI0329 21:21:32.655761 467 log.go:172] (0xc0009da9a0) Data frame received for 1\nI0329 21:21:32.655791 467 log.go:172] (0xc000727ae0) (1) Data frame handling\nI0329 21:21:32.655807 467 log.go:172] (0xc000727ae0) (1) Data frame sent\nI0329 21:21:32.655821 467 log.go:172] (0xc0009da9a0) (0xc000727ae0) Stream removed, broadcasting: 1\nI0329 21:21:32.655836 467 log.go:172] (0xc0009da9a0) Go away received\nI0329 21:21:32.656142 467 log.go:172] (0xc0009da9a0) (0xc000727ae0) Stream removed, broadcasting: 1\nI0329 21:21:32.656163 467 log.go:172] (0xc0009da9a0) (0xc000afc000) Stream removed, broadcasting: 3\nI0329 21:21:32.656172 467 log.go:172] (0xc0009da9a0) (0xc000727cc0) Stream removed, broadcasting: 5\n" Mar 29 21:21:32.660: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4670.svc.cluster.local\tcanonical name = externalsvc.services-4670.svc.cluster.local.\nName:\texternalsvc.services-4670.svc.cluster.local\nAddress: 10.108.153.125\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4670, will wait for the garbage collector to delete the pods Mar 29 21:21:32.719: INFO: Deleting ReplicationController externalsvc took: 5.971785ms Mar 29 21:21:33.019: INFO: Terminating ReplicationController externalsvc pods took: 300.251972ms Mar 29 21:21:37.653: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:21:37.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4670" for this suite. 
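The type flip logged above is a single update on the Service object: spec.type becomes ExternalName and spec.externalName points at the backing service's FQDN, after which cluster DNS serves the CNAME seen in the nslookup output. A minimal sketch with kubectl patch (names and namespace come from the log; clearing spec.clusterIP when leaving type ClusterIP is required on some versions, hence the empty string):

  kubectl -n services-4670 patch service clusterip-service --type=merge -p \
    '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-4670.svc.cluster.local","clusterIP":""}}'
  # verify the CNAME the same way the test does:
  kubectl -n services-4670 exec execpodvvqlk -- nslookup clusterip-service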
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.588 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":77,"skipped":1318,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:21:37.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:21:37.747: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-91805581-dee6-408e-8004-d37a3aecc25c" in namespace "security-context-test-6026" to be "success or failure" Mar 29 21:21:37.770: INFO: Pod "busybox-privileged-false-91805581-dee6-408e-8004-d37a3aecc25c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.788013ms Mar 29 21:21:39.774: INFO: Pod "busybox-privileged-false-91805581-dee6-408e-8004-d37a3aecc25c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026972432s Mar 29 21:21:41.778: INFO: Pod "busybox-privileged-false-91805581-dee6-408e-8004-d37a3aecc25c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031183048s Mar 29 21:21:41.778: INFO: Pod "busybox-privileged-false-91805581-dee6-408e-8004-d37a3aecc25c" satisfied condition "success or failure" Mar 29 21:21:41.785: INFO: Got logs for pod "busybox-privileged-false-91805581-dee6-408e-8004-d37a3aecc25c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:21:41.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6026" for this suite. 
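The single log line retrieved from the pod ("ip: RTNETLINK answers: Operation not permitted") is the expected result: with privileged: false the container keeps the default capability set, so netlink operations that need CAP_NET_ADMIN are refused. A minimal sketch of an equivalent pod (the exact ip subcommand the test image runs is an assumption):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-privileged-false-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox-privileged-false
      image: busybox
      command: ["sh", "-c", "ip link add dummy0 type dummy"]   # requires CAP_NET_ADMIN
      securityContext:
        privileged: false
  EOF
  # kubectl logs busybox-privileged-false-demo -> ip: RTNETLINK answers: Operation not permitted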
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:21:41.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 29 21:21:41.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2561' Mar 29 21:21:42.119: INFO: stderr: "" Mar 29 21:21:42.119: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 29 21:21:42.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2561' Mar 29 21:21:42.259: INFO: stderr: "" Mar 29 21:21:42.259: INFO: stdout: "update-demo-nautilus-qlbdt update-demo-nautilus-wmcvp " Mar 29 21:21:42.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qlbdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2561' Mar 29 21:21:42.374: INFO: stderr: "" Mar 29 21:21:42.374: INFO: stdout: "" Mar 29 21:21:42.374: INFO: update-demo-nautilus-qlbdt is created but not running Mar 29 21:21:47.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2561' Mar 29 21:21:47.467: INFO: stderr: "" Mar 29 21:21:47.467: INFO: stdout: "update-demo-nautilus-qlbdt update-demo-nautilus-wmcvp " Mar 29 21:21:47.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qlbdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2561' Mar 29 21:21:47.564: INFO: stderr: "" Mar 29 21:21:47.564: INFO: stdout: "true" Mar 29 21:21:47.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qlbdt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2561' Mar 29 21:21:47.652: INFO: stderr: "" Mar 29 21:21:47.652: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 21:21:47.652: INFO: validating pod update-demo-nautilus-qlbdt Mar 29 21:21:47.656: INFO: got data: { "image": "nautilus.jpg" } Mar 29 21:21:47.656: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 21:21:47.656: INFO: update-demo-nautilus-qlbdt is verified up and running Mar 29 21:21:47.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wmcvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2561' Mar 29 21:21:47.763: INFO: stderr: "" Mar 29 21:21:47.763: INFO: stdout: "true" Mar 29 21:21:47.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wmcvp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2561' Mar 29 21:21:47.850: INFO: stderr: "" Mar 29 21:21:47.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 21:21:47.850: INFO: validating pod update-demo-nautilus-wmcvp Mar 29 21:21:47.854: INFO: got data: { "image": "nautilus.jpg" } Mar 29 21:21:47.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 21:21:47.854: INFO: update-demo-nautilus-wmcvp is verified up and running STEP: using delete to clean up resources Mar 29 21:21:47.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2561' Mar 29 21:21:47.964: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:21:47.964: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 29 21:21:47.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2561' Mar 29 21:21:48.068: INFO: stderr: "No resources found in kubectl-2561 namespace.\n" Mar 29 21:21:48.068: INFO: stdout: "" Mar 29 21:21:48.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2561 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 29 21:21:48.161: INFO: stderr: "" Mar 29 21:21:48.161: INFO: stdout: "update-demo-nautilus-qlbdt\nupdate-demo-nautilus-wmcvp\n" Mar 29 21:21:48.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2561' Mar 29 21:21:48.840: INFO: stderr: "No resources found in kubectl-2561 namespace.\n" Mar 29 21:21:48.840: INFO: stdout: "" Mar 29 21:21:48.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2561 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 29 21:21:48.929: INFO: stderr: "" Mar 29 21:21:48.929: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:21:48.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2561" for this suite. • [SLOW TEST:7.154 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":79,"skipped":1365,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:21:48.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 29 21:21:50.036: INFO: Pod name wrapped-volume-race-e060fffc-d789-4aa7-a1af-a881f3698c59: Found 0 pods out of 5 Mar 29 21:21:55.065: INFO: Pod name wrapped-volume-race-e060fffc-d789-4aa7-a1af-a881f3698c59: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting 
ReplicationController wrapped-volume-race-e060fffc-d789-4aa7-a1af-a881f3698c59 in namespace emptydir-wrapper-9751, will wait for the garbage collector to delete the pods Mar 29 21:22:07.188: INFO: Deleting ReplicationController wrapped-volume-race-e060fffc-d789-4aa7-a1af-a881f3698c59 took: 7.074308ms Mar 29 21:22:07.588: INFO: Terminating ReplicationController wrapped-volume-race-e060fffc-d789-4aa7-a1af-a881f3698c59 pods took: 400.344414ms STEP: Creating RC which spawns configmap-volume pods Mar 29 21:22:20.341: INFO: Pod name wrapped-volume-race-b370cc55-a395-44d7-a85f-eea46f098888: Found 0 pods out of 5 Mar 29 21:22:25.348: INFO: Pod name wrapped-volume-race-b370cc55-a395-44d7-a85f-eea46f098888: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b370cc55-a395-44d7-a85f-eea46f098888 in namespace emptydir-wrapper-9751, will wait for the garbage collector to delete the pods Mar 29 21:22:39.431: INFO: Deleting ReplicationController wrapped-volume-race-b370cc55-a395-44d7-a85f-eea46f098888 took: 6.09743ms Mar 29 21:22:39.731: INFO: Terminating ReplicationController wrapped-volume-race-b370cc55-a395-44d7-a85f-eea46f098888 pods took: 300.243171ms STEP: Creating RC which spawns configmap-volume pods Mar 29 21:22:50.406: INFO: Pod name wrapped-volume-race-93dc739b-5893-4e4c-9915-9aa8b283c2c9: Found 0 pods out of 5 Mar 29 21:22:55.414: INFO: Pod name wrapped-volume-race-93dc739b-5893-4e4c-9915-9aa8b283c2c9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-93dc739b-5893-4e4c-9915-9aa8b283c2c9 in namespace emptydir-wrapper-9751, will wait for the garbage collector to delete the pods Mar 29 21:23:09.510: INFO: Deleting ReplicationController wrapped-volume-race-93dc739b-5893-4e4c-9915-9aa8b283c2c9 took: 7.806169ms Mar 29 21:23:09.811: INFO: Terminating ReplicationController wrapped-volume-race-93dc739b-5893-4e4c-9915-9aa8b283c2c9 pods took: 300.274771ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:23:20.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9751" for this suite. 
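The setup being torn down here is mechanical: 50 ConfigMaps, plus a ReplicationController whose pod template mounts every one of them as a volume, created and garbage-collected three times in a row to shake out wrapper-volume races. A sketch of the ConfigMap fan-out (names and data are illustrative):

  for i in $(seq 0 49); do
    kubectl -n emptydir-wrapper-9751 create configmap "racey-configmap-$i" --from-literal=data-1=value-1
  done
  # each ConfigMap then appears as one entry under volumes: in the RC's pod template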
• [SLOW TEST:91.236 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":80,"skipped":1370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:23:20.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b0d16786-91fb-40bc-9980-0b17e9c62716 STEP: Creating a pod to test consume configMaps Mar 29 21:23:20.268: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337" in namespace "projected-9140" to be "success or failure" Mar 29 21:23:20.272: INFO: Pod "pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216303ms Mar 29 21:23:22.276: INFO: Pod "pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008371276s Mar 29 21:23:24.280: INFO: Pod "pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012419087s STEP: Saw pod success Mar 29 21:23:24.280: INFO: Pod "pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337" satisfied condition "success or failure" Mar 29 21:23:24.283: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337 container projected-configmap-volume-test: STEP: delete the pod Mar 29 21:23:24.316: INFO: Waiting for pod pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337 to disappear Mar 29 21:23:24.320: INFO: Pod pod-projected-configmaps-c473d03d-e84f-42eb-bf59-cded5d9a2337 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:23:24.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9140" for this suite. 
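"Consumable in multiple volumes in the same pod" means one ConfigMap surfaced through two projected volumes at two mount paths, with the test container reading both copies. A minimal sketch (mount paths, key name, and command are assumptions; the ConfigMap and container names match the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
      volumeMounts:
      - { name: vol-1, mountPath: /etc/projected-1 }
      - { name: vol-2, mountPath: /etc/projected-2 }
    volumes:
    - name: vol-1
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-b0d16786-91fb-40bc-9980-0b17e9c62716
    - name: vol-2
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-b0d16786-91fb-40bc-9980-0b17e9c62716
  EOF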
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1393,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:23:24.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5472db91-2d87-4fc2-a526-b7c918061c19 STEP: Creating a pod to test consume secrets Mar 29 21:23:24.614: INFO: Waiting up to 5m0s for pod "pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f" in namespace "secrets-8353" to be "success or failure" Mar 29 21:23:24.619: INFO: Pod "pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.209249ms Mar 29 21:23:26.770: INFO: Pod "pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155463501s Mar 29 21:23:28.774: INFO: Pod "pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15934853s STEP: Saw pod success Mar 29 21:23:28.774: INFO: Pod "pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f" satisfied condition "success or failure" Mar 29 21:23:28.776: INFO: Trying to get logs from node jerma-worker pod pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f container secret-volume-test: STEP: delete the pod Mar 29 21:23:28.799: INFO: Waiting for pod pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f to disappear Mar 29 21:23:28.805: INFO: Pod pod-secrets-46014b00-1eb3-4e08-8987-dd5990a1682f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:23:28.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8353" for this suite. 
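The Secret variant that just passed has the same shape as the projected-ConfigMap pod sketched above, with plain secret volumes instead of projected sources. A sketch of the only parts that differ (key and value are illustrative):

  kubectl -n secrets-8353 create secret generic secret-test-demo --from-literal=data-1=value-1
  # in the pod spec, each of the two volume entries then reads:
  #   volumes:
  #   - name: secret-volume-1
  #     secret: { secretName: secret-test-demo }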
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1397,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:23:28.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 29 21:23:28.890: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:23:36.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2512" for this suite. • [SLOW TEST:7.415 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":83,"skipped":1417,"failed":0} SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:23:36.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5749, will wait for the garbage collector to delete the pods Mar 29 21:23:40.369: INFO: Deleting Job.batch foo took: 6.444751ms Mar 29 21:23:40.669: INFO: Terminating Job.batch foo pods took: 300.222975ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:24:13.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5749" for this suite. 
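Deleting the Job only removes the owner object; the long wait that follows in the log is the garbage collector reaping the orphaned pods, which is what "Ensuring job was deleted" asserts. A minimal sketch of the same lifecycle (the parallelism value and sleep command are assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: foo
  spec:
    parallelism: 2
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: c
          image: busybox
          command: ["sleep", "3600"]
  EOF
  kubectl delete job foo              # deletes the owner; pods become GC targets
  kubectl get pods -l job-name=foo    # drains to empty once the collector catches up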
• [SLOW TEST:37.649 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":84,"skipped":1420,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:24:13.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 29 21:24:14.389: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 29 21:24:16.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113854, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113854, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113854, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113854, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:24:19.500: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:24:19.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:24:20.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8371" for this suite. 
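The piece that makes the v1/v2 round-trip work is the CRD's conversion stanza, which tells the apiserver to call the webhook service deployed above for any cross-version read. A sketch of that stanza in the apiextensions.k8s.io/v1beta1 shape this suite runs against (the path, port, and caBundle are placeholders; the service name and namespace come from the log):

  # fragment of the CustomResourceDefinition spec
  conversion:
    strategy: Webhook
    webhookClientConfig:
      caBundle: <base64-encoded CA>
      service:
        namespace: crd-webhook-8371
        name: e2e-test-crd-conversion-webhook
        path: /crdconvert
        port: 9443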
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.905 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":85,"skipped":1421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:24:20.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:24:20.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905" in namespace "projected-2625" to be "success or failure" Mar 29 21:24:20.889: INFO: Pod "downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470946ms Mar 29 21:24:22.894: INFO: Pod "downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010749537s Mar 29 21:24:24.898: INFO: Pod "downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015192128s STEP: Saw pod success Mar 29 21:24:24.898: INFO: Pod "downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905" satisfied condition "success or failure" Mar 29 21:24:24.901: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905 container client-container: STEP: delete the pod Mar 29 21:24:24.954: INFO: Waiting for pod downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905 to disappear Mar 29 21:24:24.962: INFO: Pod downwardapi-volume-4c5a82a5-cf14-4b7c-8389-cb6f8144c905 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:24:24.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2625" for this suite. 
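The assertion behind this test: when a container declares no memory limit, a downward-API resourceFieldRef for limits.memory falls back to the node's allocatable memory rather than failing. A minimal sketch of the volume wiring (mount path and file name are assumptions; the container name matches the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]   # prints node allocatable bytes
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF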
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:24:24.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1835.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1835.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:24:31.085: INFO: DNS probes using dns-1835/dns-test-f46fb775-7bca-4a4f-b669-3083fbea40d5 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:24:31.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1835" for this suite. 
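Stripped of the retry loop, each probe script above is a single dig per transport, and the test only cares that the answer section is non-empty. The same spot-check can be run from any pod that ships dig (the image here is a commonly used debugging image, not the test's own):

  kubectl run dns-check --rm -it --restart=Never --image=tutum/dnsutils -- \
    dig +search +noall +answer kubernetes.default.svc.cluster.local A
  # any A record in the output corresponds to the probe writing OK to /results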
• [SLOW TEST:6.185 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":87,"skipped":1473,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:24:31.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 29 21:24:31.381: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:24:46.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4293" for this suite. 
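The "mark a version not served" step is a one-field change on the CRD: flipping served to false on one entry under spec.versions, after which that version's definitions drop out of the aggregated OpenAPI document at /openapi/v2 while the other version's stay intact. A sketch of the relevant fragment (group and version names are placeholders):

  # spec.versions fragment of the CRD; exactly one version keeps storage: true
  versions:
  - name: v1
    served: true
    storage: true
  - name: v2
    served: false    # schema for this version disappears from /openapi/v2
    storage: false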
• [SLOW TEST:15.377 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":88,"skipped":1481,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:24:46.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-2ee60be2-f8a0-4283-821b-3292e089ae74 STEP: Creating a pod to test consume secrets Mar 29 21:24:46.616: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6" in namespace "projected-8035" to be "success or failure" Mar 29 21:24:46.632: INFO: Pod "pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.837864ms Mar 29 21:24:48.636: INFO: Pod "pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019363202s Mar 29 21:24:50.640: INFO: Pod "pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023739065s STEP: Saw pod success Mar 29 21:24:50.640: INFO: Pod "pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6" satisfied condition "success or failure" Mar 29 21:24:50.643: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6 container projected-secret-volume-test: STEP: delete the pod Mar 29 21:24:50.678: INFO: Waiting for pod pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6 to disappear Mar 29 21:24:50.692: INFO: Pod pod-projected-secrets-888acf29-09ef-4444-998a-03bdf376d4d6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:24:50.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8035" for this suite. 
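"With mappings" refers to the items list on the projected secret source, which remaps a secret key onto a custom file path instead of the default key-named file. A minimal fragment (key and path are assumptions; the secret name comes from the log):

  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-2ee60be2-f8a0-4283-821b-3292e089ae74
          items:
          - key: data-1
            path: new-path-data-1    # mounted as <mountPath>/new-path-data-1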
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1496,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:24:50.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 29 21:24:50.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3927' Mar 29 21:24:50.865: INFO: stderr: "" Mar 29 21:24:50.865: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 29 21:24:55.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3927 -o json' Mar 29 21:24:56.004: INFO: stderr: "" Mar 29 21:24:56.004: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-29T21:24:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3927\",\n \"resourceVersion\": \"3788538\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3927/pods/e2e-test-httpd-pod\",\n \"uid\": \"1c136de7-9bcf-4fa8-83e2-54d522365070\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vnh6g\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vnh6g\",\n \"secret\": {\n 
\"defaultMode\": 420,\n \"secretName\": \"default-token-vnh6g\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-29T21:24:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-29T21:24:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-29T21:24:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-29T21:24:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6861db4a4eda9033aaf74302428445f8934762929d98afe3fd6d1249e5771b8f\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-29T21:24:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.12\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.12\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-29T21:24:50Z\"\n }\n}\n" STEP: replace the image in the pod Mar 29 21:24:56.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3927' Mar 29 21:24:56.313: INFO: stderr: "" Mar 29 21:24:56.313: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 29 21:24:56.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3927' Mar 29 21:24:59.889: INFO: stderr: "" Mar 29 21:24:59.889: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:24:59.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3927" for this suite. 
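The replace step above is the classic get-modify-replace round-trip on a live pod, changing nothing but the container image (legal for pods, since spec.containers[*].image is mutable). A sketch of the same edit with sed standing in for the test's in-memory JSON rewrite (names, namespace, and both images come from the log):

  kubectl -n kubectl-3927 get pod e2e-test-httpd-pod -o json \
    | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
    | kubectl -n kubectl-3927 replace -f -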
• [SLOW TEST:9.205 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":90,"skipped":1502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:24:59.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-s4gn STEP: Creating a pod to test atomic-volume-subpath Mar 29 21:24:59.990: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-s4gn" in namespace "subpath-8564" to be "success or failure" Mar 29 21:24:59.994: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.527356ms Mar 29 21:25:02.004: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013515827s Mar 29 21:25:04.007: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 4.01683773s Mar 29 21:25:06.011: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 6.020140953s Mar 29 21:25:08.014: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 8.024052896s Mar 29 21:25:10.018: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 10.027949057s Mar 29 21:25:12.022: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 12.032071392s Mar 29 21:25:14.027: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 14.036235373s Mar 29 21:25:16.031: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 16.040396199s Mar 29 21:25:18.035: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 18.044428917s Mar 29 21:25:20.039: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. Elapsed: 20.048862676s Mar 29 21:25:22.043: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.053049114s Mar 29 21:25:24.048: INFO: Pod "pod-subpath-test-projected-s4gn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.057344406s STEP: Saw pod success Mar 29 21:25:24.048: INFO: Pod "pod-subpath-test-projected-s4gn" satisfied condition "success or failure" Mar 29 21:25:24.051: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-s4gn container test-container-subpath-projected-s4gn: STEP: delete the pod Mar 29 21:25:24.088: INFO: Waiting for pod pod-subpath-test-projected-s4gn to disappear Mar 29 21:25:24.106: INFO: Pod pod-subpath-test-projected-s4gn no longer exists STEP: Deleting pod pod-subpath-test-projected-s4gn Mar 29 21:25:24.106: INFO: Deleting pod "pod-subpath-test-projected-s4gn" in namespace "subpath-8564" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:25:24.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8564" for this suite. • [SLOW TEST:24.210 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":91,"skipped":1597,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:25:24.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-ls4rz in namespace proxy-4333 I0329 21:25:24.222961 6 runners.go:189] Created replication controller with name: proxy-service-ls4rz, namespace: proxy-4333, replica count: 1 I0329 21:25:25.274000 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:25:26.274334 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:25:27.274560 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0329 21:25:28.274811 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0329 21:25:29.275027 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0329 21:25:30.275293 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0329 21:25:31.275566 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0329 21:25:32.275759 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0329 21:25:33.276022 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0329 21:25:34.276265 6 runners.go:189] proxy-service-ls4rz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 29 21:25:34.280: INFO: setup took 10.091985361s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 11.130022ms) Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 11.192115ms) Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 11.496515ms) Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 11.657904ms) Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 11.493334ms) Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 11.639581ms) Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 11.878216ms) Mar 29 21:25:34.292: INFO: (0) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 12.518145ms) Mar 29 21:25:34.293: INFO: (0) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 13.12281ms) Mar 29 21:25:34.293: INFO: (0) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 12.663363ms) Mar 29 21:25:34.294: INFO: (0) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 13.172275ms) Mar 29 21:25:34.298: INFO: (0) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 18.228259ms) Mar 29 21:25:34.298: INFO: (0) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 17.682798ms) Mar 29 21:25:34.299: INFO: (0) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... (200; 3.4209ms) Mar 29 21:25:34.303: INFO: (1) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.495336ms) Mar 29 21:25:34.303: INFO: (1) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 3.577991ms) Mar 29 21:25:34.303: INFO: (1) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... 
(200; 3.615666ms) Mar 29 21:25:34.303: INFO: (1) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test (200; 3.702486ms) Mar 29 21:25:34.303: INFO: (1) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.582303ms) Mar 29 21:25:34.303: INFO: (1) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 3.56773ms) Mar 29 21:25:34.304: INFO: (1) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.723573ms) Mar 29 21:25:34.306: INFO: (1) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 5.975186ms) Mar 29 21:25:34.306: INFO: (1) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 6.158932ms) Mar 29 21:25:34.306: INFO: (1) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 6.2977ms) Mar 29 21:25:34.306: INFO: (1) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 6.370655ms) Mar 29 21:25:34.306: INFO: (1) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 6.237113ms) Mar 29 21:25:34.310: INFO: (2) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.326814ms) Mar 29 21:25:34.311: INFO: (2) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 4.710067ms) Mar 29 21:25:34.311: INFO: (2) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 4.729169ms) Mar 29 21:25:34.311: INFO: (2) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.79945ms) Mar 29 21:25:34.311: INFO: (2) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: ... 
(200; 4.869817ms) Mar 29 21:25:34.311: INFO: (2) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 5.220105ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 5.568839ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 5.666212ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 5.584444ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 5.589892ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 5.635012ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 5.645156ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 5.67957ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 5.767841ms) Mar 29 21:25:34.312: INFO: (2) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 5.770471ms) Mar 29 21:25:34.319: INFO: (3) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 7.119181ms) Mar 29 21:25:34.319: INFO: (3) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 7.419096ms) Mar 29 21:25:34.319: INFO: (3) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 7.361547ms) Mar 29 21:25:34.321: INFO: (3) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 9.456501ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 9.67462ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 9.801117ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 9.956903ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 10.049464ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 10.086012ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 10.207447ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 10.469606ms) Mar 29 21:25:34.322: INFO: (3) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 10.409962ms) Mar 29 21:25:34.323: INFO: (3) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 10.498111ms) Mar 29 21:25:34.323: INFO: (3) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: ... 
(200; 3.015966ms) Mar 29 21:25:34.326: INFO: (4) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.022174ms) Mar 29 21:25:34.326: INFO: (4) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 3.057095ms) Mar 29 21:25:34.328: INFO: (4) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... (200; 5.148298ms) Mar 29 21:25:34.329: INFO: (4) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 5.080358ms) Mar 29 21:25:34.329: INFO: (4) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 5.042229ms) Mar 29 21:25:34.329: INFO: (4) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 5.186249ms) Mar 29 21:25:34.329: INFO: (4) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 5.374952ms) Mar 29 21:25:34.330: INFO: (4) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 6.291648ms) Mar 29 21:25:34.330: INFO: (4) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 6.312159ms) Mar 29 21:25:34.330: INFO: (4) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 6.377555ms) Mar 29 21:25:34.330: INFO: (4) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 6.3953ms) Mar 29 21:25:34.330: INFO: (4) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 6.423702ms) Mar 29 21:25:34.334: INFO: (5) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.045074ms) Mar 29 21:25:34.334: INFO: (5) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.022818ms) Mar 29 21:25:34.334: INFO: (5) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 4.485939ms) Mar 29 21:25:34.334: INFO: (5) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.513049ms) Mar 29 21:25:34.335: INFO: (5) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... (200; 4.566469ms) Mar 29 21:25:34.335: INFO: (5) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.764579ms) Mar 29 21:25:34.335: INFO: (5) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 4.721099ms) Mar 29 21:25:34.335: INFO: (5) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... 
(200; 4.721918ms) Mar 29 21:25:34.335: INFO: (5) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.80767ms) Mar 29 21:25:34.336: INFO: (5) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 5.688135ms) Mar 29 21:25:34.336: INFO: (5) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 5.586362ms) Mar 29 21:25:34.336: INFO: (5) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 5.701979ms) Mar 29 21:25:34.336: INFO: (5) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 5.700422ms) Mar 29 21:25:34.336: INFO: (5) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 5.773534ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.017881ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 3.989245ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 3.97218ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.061065ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.024015ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 4.148512ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.208999ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 4.303605ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 4.448138ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 4.25358ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.450383ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.468807ms) Mar 29 21:25:34.340: INFO: (6) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 4.397406ms) Mar 29 21:25:34.341: INFO: (6) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... (200; 3.437858ms) Mar 29 21:25:34.345: INFO: (7) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.579853ms) Mar 29 21:25:34.345: INFO: (7) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.52605ms) Mar 29 21:25:34.345: INFO: (7) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... 
(200; 3.639879ms) Mar 29 21:25:34.345: INFO: (7) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 3.645898ms) Mar 29 21:25:34.345: INFO: (7) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test (200; 5.428187ms) Mar 29 21:25:34.347: INFO: (7) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 5.563809ms) Mar 29 21:25:34.347: INFO: (7) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 5.530617ms) Mar 29 21:25:34.347: INFO: (7) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 5.502532ms) Mar 29 21:25:34.347: INFO: (7) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 5.558022ms) Mar 29 21:25:34.347: INFO: (7) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 5.583208ms) Mar 29 21:25:34.350: INFO: (8) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 2.473788ms) Mar 29 21:25:34.350: INFO: (8) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 2.387494ms) Mar 29 21:25:34.350: INFO: (8) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 2.699352ms) Mar 29 21:25:34.350: INFO: (8) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 2.977053ms) Mar 29 21:25:34.350: INFO: (8) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 2.411151ms) Mar 29 21:25:34.350: INFO: (8) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 3.074556ms) Mar 29 21:25:34.351: INFO: (8) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 4.037171ms) Mar 29 21:25:34.351: INFO: (8) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 3.739026ms) Mar 29 21:25:34.351: INFO: (8) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 3.920945ms) Mar 29 21:25:34.351: INFO: (8) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: ... (200; 3.76622ms) Mar 29 21:25:34.352: INFO: (8) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.693793ms) Mar 29 21:25:34.352: INFO: (8) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.866434ms) Mar 29 21:25:34.353: INFO: (8) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.69368ms) Mar 29 21:25:34.353: INFO: (8) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.661927ms) Mar 29 21:25:34.353: INFO: (8) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.769189ms) Mar 29 21:25:34.355: INFO: (9) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 2.405401ms) Mar 29 21:25:34.356: INFO: (9) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 2.605161ms) Mar 29 21:25:34.356: INFO: (9) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... 
(200; 3.384843ms) Mar 29 21:25:34.357: INFO: (9) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 3.678506ms) Mar 29 21:25:34.357: INFO: (9) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 3.742524ms) Mar 29 21:25:34.357: INFO: (9) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 4.141805ms) Mar 29 21:25:34.357: INFO: (9) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.145856ms) Mar 29 21:25:34.357: INFO: (9) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.275844ms) Mar 29 21:25:34.358: INFO: (9) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 4.555385ms) Mar 29 21:25:34.358: INFO: (9) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.584985ms) Mar 29 21:25:34.358: INFO: (9) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.575203ms) Mar 29 21:25:34.358: INFO: (9) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 4.545609ms) Mar 29 21:25:34.358: INFO: (9) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.547228ms) Mar 29 21:25:34.358: INFO: (9) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.610553ms) Mar 29 21:25:34.358: INFO: (9) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test (200; 4.682438ms) Mar 29 21:25:34.360: INFO: (10) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 2.29296ms) Mar 29 21:25:34.360: INFO: (10) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 2.359669ms) Mar 29 21:25:34.361: INFO: (10) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.007495ms) Mar 29 21:25:34.361: INFO: (10) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.131933ms) Mar 29 21:25:34.361: INFO: (10) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.081254ms) Mar 29 21:25:34.361: INFO: (10) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.436201ms) Mar 29 21:25:34.362: INFO: (10) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.621973ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.834624ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 4.856887ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 4.872119ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.938104ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 4.988909ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 4.963979ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... 
(200; 5.33053ms) Mar 29 21:25:34.363: INFO: (10) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 5.275096ms) Mar 29 21:25:34.367: INFO: (11) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 4.255379ms) Mar 29 21:25:34.368: INFO: (11) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.457802ms) Mar 29 21:25:34.368: INFO: (11) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.736107ms) Mar 29 21:25:34.368: INFO: (11) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... (200; 4.940586ms) Mar 29 21:25:34.368: INFO: (11) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 4.924342ms) Mar 29 21:25:34.369: INFO: (11) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 6.084811ms) Mar 29 21:25:34.369: INFO: (11) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 6.11056ms) Mar 29 21:25:34.370: INFO: (11) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 6.550812ms) Mar 29 21:25:34.370: INFO: (11) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 6.802096ms) Mar 29 21:25:34.370: INFO: (11) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 6.836664ms) Mar 29 21:25:34.370: INFO: (11) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 6.845985ms) Mar 29 21:25:34.370: INFO: (11) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 6.896206ms) Mar 29 21:25:34.374: INFO: (12) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 3.787858ms) Mar 29 21:25:34.374: INFO: (12) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 3.734779ms) Mar 29 21:25:34.374: INFO: (12) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 3.785801ms) Mar 29 21:25:34.374: INFO: (12) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.175398ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.304521ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 4.379946ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.394711ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 4.376901ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 4.360513ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 4.442269ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.486591ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 4.677119ms) Mar 29 21:25:34.375: INFO: (12) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... 
(200; 2.381874ms) Mar 29 21:25:34.377: INFO: (13) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: ... (200; 3.801764ms) Mar 29 21:25:34.379: INFO: (13) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 3.830092ms) Mar 29 21:25:34.379: INFO: (13) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.891665ms) Mar 29 21:25:34.380: INFO: (13) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.609618ms) Mar 29 21:25:34.380: INFO: (13) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 4.691263ms) Mar 29 21:25:34.380: INFO: (13) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.7583ms) Mar 29 21:25:34.380: INFO: (13) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 4.921483ms) Mar 29 21:25:34.380: INFO: (13) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 4.928212ms) Mar 29 21:25:34.380: INFO: (13) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.93042ms) Mar 29 21:25:34.382: INFO: (14) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 2.352606ms) Mar 29 21:25:34.388: INFO: (14) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 7.40018ms) Mar 29 21:25:34.388: INFO: (14) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 7.339323ms) Mar 29 21:25:34.388: INFO: (14) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 7.283898ms) Mar 29 21:25:34.388: INFO: (14) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 7.716046ms) Mar 29 21:25:34.388: INFO: (14) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 8.194748ms) Mar 29 21:25:34.388: INFO: (14) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: ... (200; 8.821472ms) Mar 29 21:25:34.389: INFO: (14) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 8.918499ms) Mar 29 21:25:34.389: INFO: (14) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 8.862268ms) Mar 29 21:25:34.389: INFO: (14) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 9.006364ms) Mar 29 21:25:34.389: INFO: (14) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 8.980883ms) Mar 29 21:25:34.389: INFO: (14) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 8.978219ms) Mar 29 21:25:34.389: INFO: (14) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 9.017375ms) Mar 29 21:25:34.392: INFO: (15) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 2.650294ms) Mar 29 21:25:34.392: INFO: (15) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test (200; 2.948953ms) Mar 29 21:25:34.392: INFO: (15) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... 
(200; 3.1732ms) Mar 29 21:25:34.392: INFO: (15) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.137324ms) Mar 29 21:25:34.393: INFO: (15) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.58386ms) Mar 29 21:25:34.393: INFO: (15) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.553852ms) Mar 29 21:25:34.393: INFO: (15) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.572402ms) Mar 29 21:25:34.393: INFO: (15) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 3.600146ms) Mar 29 21:25:34.393: INFO: (15) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 4.076619ms) Mar 29 21:25:34.394: INFO: (15) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 4.525142ms) Mar 29 21:25:34.394: INFO: (15) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.690435ms) Mar 29 21:25:34.394: INFO: (15) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 4.757347ms) Mar 29 21:25:34.394: INFO: (15) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.783884ms) Mar 29 21:25:34.394: INFO: (15) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 4.810983ms) Mar 29 21:25:34.394: INFO: (15) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.733109ms) Mar 29 21:25:34.397: INFO: (16) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 3.062476ms) Mar 29 21:25:34.397: INFO: (16) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 3.165361ms) Mar 29 21:25:34.397: INFO: (16) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 3.093316ms) Mar 29 21:25:34.397: INFO: (16) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 3.225736ms) Mar 29 21:25:34.399: INFO: (16) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:160/proxy/: foo (200; 4.753955ms) Mar 29 21:25:34.400: INFO: (16) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 5.475471ms) Mar 29 21:25:34.400: INFO: (16) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 5.546123ms) Mar 29 21:25:34.400: INFO: (16) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 5.695207ms) Mar 29 21:25:34.400: INFO: (16) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: ... (200; 3.895666ms) Mar 29 21:25:34.405: INFO: (17) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775/proxy/: test (200; 3.894458ms) Mar 29 21:25:34.405: INFO: (17) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 3.978794ms) Mar 29 21:25:34.405: INFO: (17) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test<... 
(200; 4.303198ms) Mar 29 21:25:34.406: INFO: (17) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 4.989546ms) Mar 29 21:25:34.406: INFO: (17) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 5.092525ms) Mar 29 21:25:34.406: INFO: (17) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 5.085471ms) Mar 29 21:25:34.406: INFO: (17) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 5.166631ms) Mar 29 21:25:34.406: INFO: (17) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 5.107414ms) Mar 29 21:25:34.406: INFO: (17) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 5.118035ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test (200; 4.745923ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 4.832501ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 4.7107ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... (200; 4.780297ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.82987ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 4.878519ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 4.842952ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.984129ms) Mar 29 21:25:34.411: INFO: (18) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 4.901394ms) Mar 29 21:25:34.412: INFO: (18) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 5.181673ms) Mar 29 21:25:34.413: INFO: (19) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:162/proxy/: bar (200; 1.819618ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/: foo (200; 2.859926ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:162/proxy/: bar (200; 3.000571ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:1080/proxy/: test<... (200; 3.168514ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/pods/http:proxy-service-ls4rz-bt775:1080/proxy/: ... 
(200; 3.432229ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:443/proxy/: test (200; 3.548781ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:460/proxy/: tls baz (200; 3.52725ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/pods/https:proxy-service-ls4rz-bt775:462/proxy/: tls qux (200; 3.503747ms) Mar 29 21:25:34.415: INFO: (19) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname2/proxy/: bar (200; 3.648688ms) Mar 29 21:25:34.416: INFO: (19) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname2/proxy/: bar (200; 4.604216ms) Mar 29 21:25:34.416: INFO: (19) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname1/proxy/: tls baz (200; 4.683328ms) Mar 29 21:25:34.416: INFO: (19) /api/v1/namespaces/proxy-4333/services/https:proxy-service-ls4rz:tlsportname2/proxy/: tls qux (200; 4.622807ms) Mar 29 21:25:34.416: INFO: (19) /api/v1/namespaces/proxy-4333/services/proxy-service-ls4rz:portname1/proxy/: foo (200; 4.652017ms) Mar 29 21:25:34.416: INFO: (19) /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/: foo (200; 4.613636ms) STEP: deleting ReplicationController proxy-service-ls4rz in namespace proxy-4333, will wait for the garbage collector to delete the pods Mar 29 21:25:34.475: INFO: Deleting ReplicationController proxy-service-ls4rz took: 6.962181ms Mar 29 21:25:34.776: INFO: Terminating ReplicationController proxy-service-ls4rz pods took: 300.235509ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:25:39.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4333" for this suite. 
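All 320 requests above go through the apiserver's proxy subresource, which forwards either to the service (via a named port) or to the pod directly. As a reference for reproducing one of these calls outside the suite, here is a minimal client-go sketch — not the suite's own code; it assumes client-go v0.18+ request signatures, and the namespace, service, pod, and port names are copied from this particular run and will differ in yours:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite reports.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/namespaces/proxy-4333/services/http:proxy-service-ls4rz:portname1/proxy/
	svcBody, err := client.CoreV1().Services("proxy-4333").
		ProxyGet("http", "proxy-service-ls4rz", "portname1", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("service proxy returned: %s\n", svcBody)

	// GET /api/v1/namespaces/proxy-4333/pods/proxy-service-ls4rz-bt775:160/proxy/
	podBody, err := client.CoreV1().Pods("proxy-4333").
		ProxyGet("", "proxy-service-ls4rz-bt775", "160", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod proxy returned: %s\n", podBody)
}
```

The spec expects a 200 and the echo server's body ("foo", "bar", "test", "tls baz", ...) from every combination of scheme, target, and named or numeric port, which is exactly what the records above show.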
• [SLOW TEST:15.469 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":92,"skipped":1614,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:25:39.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 29 21:25:39.710: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4431 /api/v1/namespaces/watch-4431/configmaps/e2e-watch-test-resource-version 5e334136-edee-429b-9f64-6a83bd816d5d 3788771 0 2020-03-29 21:25:39 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 29 21:25:39.710: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4431 /api/v1/namespaces/watch-4431/configmaps/e2e-watch-test-resource-version 5e334136-edee-429b-9f64-6a83bd816d5d 3788772 0 2020-03-29 21:25:39 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:25:39.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4431" for this suite.
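The key point in the spec above is that the watch is opened at the resourceVersion returned by the first update, so the apiserver delivers only events after that version: the second MODIFIED and the DELETED, exactly the two records in the log. The same pattern with client-go, as a sketch (v0.18+ method signatures assumed; namespace and name copied from this run):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "watch-4431", "e2e-watch-test-resource-version"

	// Grab the object's current resourceVersion; in the test this is the
	// version returned by the first of the two updates.
	cm, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Watch from that version: only changes *after* it are delivered.
	w, err := client.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=" + name,
		ResourceVersion: cm.ResourceVersion,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
```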
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":93,"skipped":1637,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:25:39.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 29 21:25:39.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3813'
Mar 29 21:25:39.901: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 29 21:25:39.901: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602
Mar 29 21:25:41.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3813'
Mar 29 21:25:42.025: INFO: stderr: ""
Mar 29 21:25:42.025: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:25:42.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3813" for this suite.
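The stderr captured above is the generator deprecation notice: `kubectl run --generator=deployment/apps.v1` was later removed, and the replacement is to create the Deployment explicitly. A rough client-go equivalent of what the generator produced, as a sketch (it assumes the generator's defaults of one replica and a `run` label, which may not match it exactly; name, image, and namespace are taken from the log, and v0.18+ Create signatures are assumed):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	replicas := int32(1)

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}

	// Create the deployment in the namespace this run used.
	if _, err := client.AppsV1().Deployments("kubectl-3813").
		Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```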
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":94,"skipped":1656,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:25:42.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:25:42.664: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:25:44.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113942, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113942, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113942, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113942, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:25:47.750: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:25:47.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-2609" for this suite. STEP: Destroying namespace "webhook-2609-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.908 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":95,"skipped":1675,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:25:47.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:25:48.121: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 34.362643ms) Mar 29 21:25:48.124: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.474171ms) Mar 29 21:25:48.128: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.449356ms) Mar 29 21:25:48.131: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.177166ms) Mar 29 21:25:48.134: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.011536ms) Mar 29 21:25:48.137: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.349699ms) Mar 29 21:25:48.140: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.542117ms) Mar 29 21:25:48.143: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.660092ms) Mar 29 21:25:48.145: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.653743ms) Mar 29 21:25:48.148: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.638913ms) Mar 29 21:25:48.151: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.950625ms) Mar 29 21:25:48.154: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.840406ms) Mar 29 21:25:48.157: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.694741ms) Mar 29 21:25:48.160: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.075348ms) Mar 29 21:25:48.163: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.026475ms) Mar 29 21:25:48.166: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.980876ms) Mar 29 21:25:48.169: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.521755ms) Mar 29 21:25:48.172: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.056909ms) Mar 29 21:25:48.175: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.849216ms) Mar 29 21:25:48.178: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.060493ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:25:48.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8631" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":96,"skipped":1682,"failed":0} ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:25:48.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:25:48.492: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.0694ms) Mar 29 21:25:48.495: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.445243ms) Mar 29 21:25:48.499: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.651027ms) Mar 29 21:25:48.503: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.325731ms) Mar 29 21:25:48.506: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.210328ms) Mar 29 21:25:48.509: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.594469ms) Mar 29 21:25:48.513: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.503394ms) Mar 29 21:25:48.528: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 15.430033ms) Mar 29 21:25:48.531: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.866686ms) Mar 29 21:25:48.534: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.960978ms) Mar 29 21:25:48.538: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.877031ms) Mar 29 21:25:48.542: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.797507ms) Mar 29 21:25:48.546: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.468283ms) Mar 29 21:25:48.549: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.770988ms) Mar 29 21:25:48.553: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.607685ms) Mar 29 21:25:48.557: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.401124ms) Mar 29 21:25:48.560: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.797842ms) Mar 29 21:25:48.564: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.555191ms) Mar 29 21:25:48.567: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.742866ms) Mar 29 21:25:48.570: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.047423ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:25:48.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4588" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":97,"skipped":1682,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:25:48.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:25:49.236: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:25:51.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113949, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113949, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113949, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721113949, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:25:54.313: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:26:06.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7693" for this suite. STEP: Destroying namespace "webhook-7693-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.934 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":98,"skipped":1683,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:26:06.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-09530014-54b2-4d79-84c6-2568aad97a5b in namespace container-probe-1776 Mar 29 21:26:10.630: INFO: Started pod liveness-09530014-54b2-4d79-84c6-2568aad97a5b in namespace container-probe-1776 STEP: checking the pod's current state and verifying that restartCount is present Mar 29 21:26:10.633: INFO: Initial restart count of pod liveness-09530014-54b2-4d79-84c6-2568aad97a5b is 0 Mar 29 21:26:28.754: INFO: Restart count of pod container-probe-1776/liveness-09530014-54b2-4d79-84c6-2568aad97a5b is now 1 (18.121366231s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:26:28.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1776" for this suite. • [SLOW TEST:22.339 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1684,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:26:28.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:26:45.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9926" for this suite. • [SLOW TEST:16.397 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":100,"skipped":1693,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:26:45.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:26:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8268" for this suite. 
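------------------------------
The best-effort-scope quota test above hinges on QoS classes: a ResourceQuota scoped to BestEffort counts only pods that declare no resource requests or limits, while a NotBestEffort-scoped quota ignores them. A minimal sketch of the three objects involved (names, limits, and the pause image are illustrative, not what the suite generates):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort
spec:
  hard:
    pods: "5"
  scopes: ["BestEffort"]        # counts only BestEffort pods
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort
spec:
  hard:
    pods: "5"
  scopes: ["NotBestEffort"]     # ignores BestEffort pods
---
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod          # no requests/limits, so QoS class is BestEffort
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1

Creating the pod should raise status.used.pods only on the BestEffort-scoped quota, which is exactly the captures/ignored pairing the STEP lines verify.
------------------------------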
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1708,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:26:49.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:26:49.451: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf" in namespace "projected-3367" to be "success or failure" Mar 29 21:26:49.455: INFO: Pod "downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935659ms Mar 29 21:26:51.459: INFO: Pod "downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008015016s Mar 29 21:26:53.463: INFO: Pod "downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011345715s STEP: Saw pod success Mar 29 21:26:53.463: INFO: Pod "downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf" satisfied condition "success or failure" Mar 29 21:26:53.466: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf container client-container: STEP: delete the pod Mar 29 21:26:53.498: INFO: Waiting for pod downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf to disappear Mar 29 21:26:53.506: INFO: Pod downwardapi-volume-b8b436b5-ed3c-4531-a3fe-cb1f5fbd71cf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:26:53.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3367" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1708,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:26:53.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:26:53.590: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 29 21:26:53.609: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 29 21:26:58.611: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 29 21:26:58.611: INFO: Creating deployment "test-rolling-update-deployment" Mar 29 21:26:58.614: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 29 21:26:58.622: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 29 21:27:00.629: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 29 21:27:00.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114018, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114018, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114018, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114018, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:27:02.635: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 29 21:27:02.642: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5368 /apis/apps/v1/namespaces/deployment-5368/deployments/test-rolling-update-deployment dbe79fe1-cc5b-4594-a05a-a22d89d667e1 3789401 1 2020-03-29 21:26:58 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039cfc38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-29 21:26:58 +0000 UTC,LastTransitionTime:2020-03-29 21:26:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-29 21:27:01 +0000 UTC,LastTransitionTime:2020-03-29 21:26:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 29 21:27:02.645: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-5368 /apis/apps/v1/namespaces/deployment-5368/replicasets/test-rolling-update-deployment-67cf4f6444 ab88c11f-b057-4b77-b57b-28998ef34b55 3789390 1 2020-03-29 21:26:58 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment dbe79fe1-cc5b-4594-a05a-a22d89d667e1 0xc003a1dd37 0xc003a1dd38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a1dda8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:27:02.645: INFO: All old
ReplicaSets of Deployment "test-rolling-update-deployment": Mar 29 21:27:02.645: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5368 /apis/apps/v1/namespaces/deployment-5368/replicasets/test-rolling-update-controller d4f4c509-5bb5-429f-ab53-70a4301e956d 3789400 2 2020-03-29 21:26:53 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment dbe79fe1-cc5b-4594-a05a-a22d89d667e1 0xc003a1dc4f 0xc003a1dc60}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003a1dcc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:27:02.667: INFO: Pod "test-rolling-update-deployment-67cf4f6444-ptvkm" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-ptvkm test-rolling-update-deployment-67cf4f6444- deployment-5368 /api/v1/namespaces/deployment-5368/pods/test-rolling-update-deployment-67cf4f6444-ptvkm d22b1908-eb26-4f66-b72b-6b0674b9b265 3789389 0 2020-03-29 21:26:58 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 ab88c11f-b057-4b77-b57b-28998ef34b55 0xc003936257 0xc003936258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4ldb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4ldb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4ldb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:26:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:27:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:27:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:26:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.233,StartTime:2020-03-29 21:26:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:27:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4ed81ca8741d548521b1a625ae4c0b757f13f4b7dca8634252770026f49c20fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:27:02.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5368" for this suite. • [SLOW TEST:9.158 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":103,"skipped":1717,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:27:02.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-6774 STEP: creating replication controller nodeport-test in namespace services-6774 I0329 21:27:02.849739 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6774, replica count: 2 I0329 21:27:05.900198 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:27:08.900470 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 29 21:27:08.900: INFO: Creating new exec pod Mar 29 21:27:13.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6774 execpodr27rb -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 29 21:27:14.159: INFO: stderr: "I0329 21:27:14.059526 880 log.go:172] (0xc000118370) (0xc000776000) Create stream\nI0329 21:27:14.059599 880 log.go:172] (0xc000118370) (0xc000776000) Stream added, broadcasting: 
1\nI0329 21:27:14.062638 880 log.go:172] (0xc000118370) Reply frame received for 1\nI0329 21:27:14.062672 880 log.go:172] (0xc000118370) (0xc0007760a0) Create stream\nI0329 21:27:14.062681 880 log.go:172] (0xc000118370) (0xc0007760a0) Stream added, broadcasting: 3\nI0329 21:27:14.063649 880 log.go:172] (0xc000118370) Reply frame received for 3\nI0329 21:27:14.063678 880 log.go:172] (0xc000118370) (0xc000afa280) Create stream\nI0329 21:27:14.063687 880 log.go:172] (0xc000118370) (0xc000afa280) Stream added, broadcasting: 5\nI0329 21:27:14.064568 880 log.go:172] (0xc000118370) Reply frame received for 5\nI0329 21:27:14.149650 880 log.go:172] (0xc000118370) Data frame received for 5\nI0329 21:27:14.149684 880 log.go:172] (0xc000afa280) (5) Data frame handling\nI0329 21:27:14.149703 880 log.go:172] (0xc000afa280) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0329 21:27:14.150290 880 log.go:172] (0xc000118370) Data frame received for 5\nI0329 21:27:14.150344 880 log.go:172] (0xc000afa280) (5) Data frame handling\nI0329 21:27:14.150386 880 log.go:172] (0xc000afa280) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0329 21:27:14.150581 880 log.go:172] (0xc000118370) Data frame received for 5\nI0329 21:27:14.150595 880 log.go:172] (0xc000afa280) (5) Data frame handling\nI0329 21:27:14.150730 880 log.go:172] (0xc000118370) Data frame received for 3\nI0329 21:27:14.150766 880 log.go:172] (0xc0007760a0) (3) Data frame handling\nI0329 21:27:14.152283 880 log.go:172] (0xc000118370) Data frame received for 1\nI0329 21:27:14.152297 880 log.go:172] (0xc000776000) (1) Data frame handling\nI0329 21:27:14.152306 880 log.go:172] (0xc000776000) (1) Data frame sent\nI0329 21:27:14.152438 880 log.go:172] (0xc000118370) (0xc000776000) Stream removed, broadcasting: 1\nI0329 21:27:14.153372 880 log.go:172] (0xc000118370) Go away received\nI0329 21:27:14.154734 880 log.go:172] (0xc000118370) (0xc000776000) Stream removed, broadcasting: 1\nI0329 21:27:14.154755 880 log.go:172] (0xc000118370) (0xc0007760a0) Stream removed, broadcasting: 3\nI0329 21:27:14.154764 880 log.go:172] (0xc000118370) (0xc000afa280) Stream removed, broadcasting: 5\n" Mar 29 21:27:14.159: INFO: stdout: "" Mar 29 21:27:14.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6774 execpodr27rb -- /bin/sh -x -c nc -zv -t -w 2 10.105.255.134 80' Mar 29 21:27:14.360: INFO: stderr: "I0329 21:27:14.281107 900 log.go:172] (0xc00095a630) (0xc0002b3b80) Create stream\nI0329 21:27:14.281306 900 log.go:172] (0xc00095a630) (0xc0002b3b80) Stream added, broadcasting: 1\nI0329 21:27:14.283837 900 log.go:172] (0xc00095a630) Reply frame received for 1\nI0329 21:27:14.283884 900 log.go:172] (0xc00095a630) (0xc000922000) Create stream\nI0329 21:27:14.283900 900 log.go:172] (0xc00095a630) (0xc000922000) Stream added, broadcasting: 3\nI0329 21:27:14.285041 900 log.go:172] (0xc00095a630) Reply frame received for 3\nI0329 21:27:14.285068 900 log.go:172] (0xc00095a630) (0xc0002b3c20) Create stream\nI0329 21:27:14.285078 900 log.go:172] (0xc00095a630) (0xc0002b3c20) Stream added, broadcasting: 5\nI0329 21:27:14.286135 900 log.go:172] (0xc00095a630) Reply frame received for 5\nI0329 21:27:14.354816 900 log.go:172] (0xc00095a630) Data frame received for 5\nI0329 21:27:14.354853 900 log.go:172] (0xc0002b3c20) (5) Data frame handling\nI0329 21:27:14.354864 900 log.go:172] (0xc0002b3c20) (5) Data frame sent\nI0329 21:27:14.354872 900 log.go:172] (0xc00095a630) Data frame received for 
5\nI0329 21:27:14.354880 900 log.go:172] (0xc0002b3c20) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.255.134 80\nConnection to 10.105.255.134 80 port [tcp/http] succeeded!\nI0329 21:27:14.354904 900 log.go:172] (0xc00095a630) Data frame received for 3\nI0329 21:27:14.354912 900 log.go:172] (0xc000922000) (3) Data frame handling\nI0329 21:27:14.356125 900 log.go:172] (0xc00095a630) Data frame received for 1\nI0329 21:27:14.356145 900 log.go:172] (0xc0002b3b80) (1) Data frame handling\nI0329 21:27:14.356154 900 log.go:172] (0xc0002b3b80) (1) Data frame sent\nI0329 21:27:14.356165 900 log.go:172] (0xc00095a630) (0xc0002b3b80) Stream removed, broadcasting: 1\nI0329 21:27:14.356242 900 log.go:172] (0xc00095a630) Go away received\nI0329 21:27:14.356417 900 log.go:172] (0xc00095a630) (0xc0002b3b80) Stream removed, broadcasting: 1\nI0329 21:27:14.356430 900 log.go:172] (0xc00095a630) (0xc000922000) Stream removed, broadcasting: 3\nI0329 21:27:14.356436 900 log.go:172] (0xc00095a630) (0xc0002b3c20) Stream removed, broadcasting: 5\n" Mar 29 21:27:14.360: INFO: stdout: "" Mar 29 21:27:14.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6774 execpodr27rb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31677' Mar 29 21:27:14.572: INFO: stderr: "I0329 21:27:14.491958 920 log.go:172] (0xc0000f5550) (0xc00075e1e0) Create stream\nI0329 21:27:14.492019 920 log.go:172] (0xc0000f5550) (0xc00075e1e0) Stream added, broadcasting: 1\nI0329 21:27:14.494727 920 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0329 21:27:14.494763 920 log.go:172] (0xc0000f5550) (0xc0005ca780) Create stream\nI0329 21:27:14.494775 920 log.go:172] (0xc0000f5550) (0xc0005ca780) Stream added, broadcasting: 3\nI0329 21:27:14.495660 920 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0329 21:27:14.495699 920 log.go:172] (0xc0000f5550) (0xc0005eda40) Create stream\nI0329 21:27:14.495713 920 log.go:172] (0xc0000f5550) (0xc0005eda40) Stream added, broadcasting: 5\nI0329 21:27:14.496739 920 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0329 21:27:14.566323 920 log.go:172] (0xc0000f5550) Data frame received for 3\nI0329 21:27:14.566356 920 log.go:172] (0xc0005ca780) (3) Data frame handling\nI0329 21:27:14.566378 920 log.go:172] (0xc0000f5550) Data frame received for 5\nI0329 21:27:14.566388 920 log.go:172] (0xc0005eda40) (5) Data frame handling\nI0329 21:27:14.566399 920 log.go:172] (0xc0005eda40) (5) Data frame sent\nI0329 21:27:14.566408 920 log.go:172] (0xc0000f5550) Data frame received for 5\nI0329 21:27:14.566416 920 log.go:172] (0xc0005eda40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31677\nConnection to 172.17.0.10 31677 port [tcp/31677] succeeded!\nI0329 21:27:14.567545 920 log.go:172] (0xc0000f5550) Data frame received for 1\nI0329 21:27:14.567586 920 log.go:172] (0xc00075e1e0) (1) Data frame handling\nI0329 21:27:14.567613 920 log.go:172] (0xc00075e1e0) (1) Data frame sent\nI0329 21:27:14.567642 920 log.go:172] (0xc0000f5550) (0xc00075e1e0) Stream removed, broadcasting: 1\nI0329 21:27:14.567667 920 log.go:172] (0xc0000f5550) Go away received\nI0329 21:27:14.568047 920 log.go:172] (0xc0000f5550) (0xc00075e1e0) Stream removed, broadcasting: 1\nI0329 21:27:14.568070 920 log.go:172] (0xc0000f5550) (0xc0005ca780) Stream removed, broadcasting: 3\nI0329 21:27:14.568082 920 log.go:172] (0xc0000f5550) (0xc0005eda40) Stream removed, broadcasting: 5\n" Mar 29 21:27:14.572: INFO: stdout: "" Mar 29 21:27:14.572: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-6774 execpodr27rb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31677' Mar 29 21:27:14.759: INFO: stderr: "I0329 21:27:14.684130 942 log.go:172] (0xc000a580b0) (0xc000590d20) Create stream\nI0329 21:27:14.684195 942 log.go:172] (0xc000a580b0) (0xc000590d20) Stream added, broadcasting: 1\nI0329 21:27:14.686660 942 log.go:172] (0xc000a580b0) Reply frame received for 1\nI0329 21:27:14.686695 942 log.go:172] (0xc000a580b0) (0xc0009ae000) Create stream\nI0329 21:27:14.686713 942 log.go:172] (0xc000a580b0) (0xc0009ae000) Stream added, broadcasting: 3\nI0329 21:27:14.687636 942 log.go:172] (0xc000a580b0) Reply frame received for 3\nI0329 21:27:14.687678 942 log.go:172] (0xc000a580b0) (0xc00069bb80) Create stream\nI0329 21:27:14.687690 942 log.go:172] (0xc000a580b0) (0xc00069bb80) Stream added, broadcasting: 5\nI0329 21:27:14.688422 942 log.go:172] (0xc000a580b0) Reply frame received for 5\nI0329 21:27:14.753744 942 log.go:172] (0xc000a580b0) Data frame received for 5\nI0329 21:27:14.753790 942 log.go:172] (0xc00069bb80) (5) Data frame handling\nI0329 21:27:14.753807 942 log.go:172] (0xc00069bb80) (5) Data frame sent\nI0329 21:27:14.753818 942 log.go:172] (0xc000a580b0) Data frame received for 5\nI0329 21:27:14.753827 942 log.go:172] (0xc00069bb80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31677\nConnection to 172.17.0.8 31677 port [tcp/31677] succeeded!\nI0329 21:27:14.753854 942 log.go:172] (0xc000a580b0) Data frame received for 3\nI0329 21:27:14.753865 942 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0329 21:27:14.755247 942 log.go:172] (0xc000a580b0) Data frame received for 1\nI0329 21:27:14.755277 942 log.go:172] (0xc000590d20) (1) Data frame handling\nI0329 21:27:14.755291 942 log.go:172] (0xc000590d20) (1) Data frame sent\nI0329 21:27:14.755302 942 log.go:172] (0xc000a580b0) (0xc000590d20) Stream removed, broadcasting: 1\nI0329 21:27:14.755332 942 log.go:172] (0xc000a580b0) Go away received\nI0329 21:27:14.755659 942 log.go:172] (0xc000a580b0) (0xc000590d20) Stream removed, broadcasting: 1\nI0329 21:27:14.755674 942 log.go:172] (0xc000a580b0) (0xc0009ae000) Stream removed, broadcasting: 3\nI0329 21:27:14.755682 942 log.go:172] (0xc000a580b0) (0xc00069bb80) Stream removed, broadcasting: 5\n" Mar 29 21:27:14.759: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:27:14.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6774" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.094 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":104,"skipped":1735,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:27:14.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:27:28.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1215" for this suite. • [SLOW TEST:13.242 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":105,"skipped":1741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:27:28.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 29 21:27:32.113: INFO: Pod pod-hostip-94df1dca-7eae-41a3-b14a-3d573f51a504 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:27:32.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9997" for this suite. 
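------------------------------
The ResourceQuota lifecycle test two entries back depends on the quota admission controller charging a pod's resource requests against spec.hard and rejecting anything that would exceed the remainder. A rough sketch of a quota and a pod that fits it (names, quantities, and the pause image are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-pod-lifecycle
spec:
  hard:
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fits-quota
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 500m        # charged against requests.cpu
        memory: 252Mi    # charged against requests.memory

A second pod whose requests exceed the remaining headroom is refused at admission time, and deleting the first pod releases its usage again, matching the STEP sequence above.
------------------------------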
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1774,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:27:32.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:27:32.162: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 29 21:27:34.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5687 create -f -' Mar 29 21:27:37.472: INFO: stderr: "" Mar 29 21:27:37.472: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 29 21:27:37.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5687 delete e2e-test-crd-publish-openapi-4836-crds test-cr' Mar 29 21:27:37.606: INFO: stderr: "" Mar 29 21:27:37.606: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 29 21:27:37.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5687 apply -f -' Mar 29 21:27:38.124: INFO: stderr: "" Mar 29 21:27:38.124: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 29 21:27:38.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5687 delete e2e-test-crd-publish-openapi-4836-crds test-cr' Mar 29 21:27:38.240: INFO: stderr: "" Mar 29 21:27:38.240: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 29 21:27:38.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4836-crds' Mar 29 21:27:38.554: INFO: stderr: "" Mar 29 21:27:38.554: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4836-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. 
In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:27:41.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5687" for this suite. • [SLOW TEST:9.352 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":107,"skipped":1787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:27:41.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:27:47.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8589" for this suite. STEP: Destroying namespace "nsdeletetest-7764" for this suite. Mar 29 21:27:47.766: INFO: Namespace nsdeletetest-7764 was already deleted STEP: Destroying namespace "nsdeletetest-2569" for this suite. 
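------------------------------
The "preserving unknown fields in an embedded object" test above (the kind that kubectl explain describes as a Waldo) relies on a CRD schema that opts out of pruning. A simplified sketch under apiextensions.k8s.io/v1: the suite's actual fixture nests the marker inside an embedded object, whereas here it sits directly on spec, and the group and names are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true   # fields under spec survive pruning

With the marker set, both client-side validation and the API server accept arbitrary properties under spec, which is what the kubectl create and apply steps demonstrate.
------------------------------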
• [SLOW TEST:6.297 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":108,"skipped":1820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:27:47.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9767 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9767 STEP: creating replication controller externalsvc in namespace services-9767 I0329 21:27:47.956053 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9767, replica count: 2 I0329 21:27:51.006476 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:27:54.006684 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 29 21:27:54.058: INFO: Creating new exec pod Mar 29 21:27:58.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9767 execpodjlxd7 -- /bin/sh -x -c nslookup nodeport-service' Mar 29 21:27:58.284: INFO: stderr: "I0329 21:27:58.202550 1073 log.go:172] (0xc000104c60) (0xc000609f40) Create stream\nI0329 21:27:58.202603 1073 log.go:172] (0xc000104c60) (0xc000609f40) Stream added, broadcasting: 1\nI0329 21:27:58.204945 1073 log.go:172] (0xc000104c60) Reply frame received for 1\nI0329 21:27:58.204985 1073 log.go:172] (0xc000104c60) (0xc000512820) Create stream\nI0329 21:27:58.204996 1073 log.go:172] (0xc000104c60) (0xc000512820) Stream added, broadcasting: 3\nI0329 21:27:58.206099 1073 log.go:172] (0xc000104c60) Reply frame received for 3\nI0329 21:27:58.206126 1073 log.go:172] (0xc000104c60) (0xc0009cc000) Create stream\nI0329 21:27:58.206135 1073 log.go:172] (0xc000104c60) (0xc0009cc000) Stream added, broadcasting: 5\nI0329 21:27:58.207033 1073 log.go:172] (0xc000104c60) Reply frame received for 5\nI0329 21:27:58.266966 1073 log.go:172] (0xc000104c60) Data frame received for 5\nI0329 
21:27:58.267009 1073 log.go:172] (0xc0009cc000) (5) Data frame handling\nI0329 21:27:58.267049 1073 log.go:172] (0xc0009cc000) (5) Data frame sent\n+ nslookup nodeport-service\nI0329 21:27:58.277306 1073 log.go:172] (0xc000104c60) Data frame received for 3\nI0329 21:27:58.277351 1073 log.go:172] (0xc000512820) (3) Data frame handling\nI0329 21:27:58.277387 1073 log.go:172] (0xc000512820) (3) Data frame sent\nI0329 21:27:58.278403 1073 log.go:172] (0xc000104c60) Data frame received for 3\nI0329 21:27:58.278422 1073 log.go:172] (0xc000512820) (3) Data frame handling\nI0329 21:27:58.278438 1073 log.go:172] (0xc000512820) (3) Data frame sent\nI0329 21:27:58.278762 1073 log.go:172] (0xc000104c60) Data frame received for 3\nI0329 21:27:58.278789 1073 log.go:172] (0xc000512820) (3) Data frame handling\nI0329 21:27:58.278890 1073 log.go:172] (0xc000104c60) Data frame received for 5\nI0329 21:27:58.278928 1073 log.go:172] (0xc0009cc000) (5) Data frame handling\nI0329 21:27:58.280493 1073 log.go:172] (0xc000104c60) Data frame received for 1\nI0329 21:27:58.280519 1073 log.go:172] (0xc000609f40) (1) Data frame handling\nI0329 21:27:58.280543 1073 log.go:172] (0xc000609f40) (1) Data frame sent\nI0329 21:27:58.280563 1073 log.go:172] (0xc000104c60) (0xc000609f40) Stream removed, broadcasting: 1\nI0329 21:27:58.280931 1073 log.go:172] (0xc000104c60) Go away received\nI0329 21:27:58.280997 1073 log.go:172] (0xc000104c60) (0xc000609f40) Stream removed, broadcasting: 1\nI0329 21:27:58.281021 1073 log.go:172] (0xc000104c60) (0xc000512820) Stream removed, broadcasting: 3\nI0329 21:27:58.281038 1073 log.go:172] (0xc000104c60) (0xc0009cc000) Stream removed, broadcasting: 5\n" Mar 29 21:27:58.284: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9767.svc.cluster.local\tcanonical name = externalsvc.services-9767.svc.cluster.local.\nName:\texternalsvc.services-9767.svc.cluster.local\nAddress: 10.106.55.186\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9767, will wait for the garbage collector to delete the pods Mar 29 21:27:58.345: INFO: Deleting ReplicationController externalsvc took: 6.247298ms Mar 29 21:27:58.645: INFO: Terminating ReplicationController externalsvc pods took: 300.34716ms Mar 29 21:28:09.614: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:09.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9767" for this suite. 
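------------------------------
Switching the Service's type to ExternalName, as this test does, replaces its cluster DNS record with a CNAME; the nslookup output above shows nodeport-service resolving to the FQDN of externalsvc. After the change the Service looks roughly like this (only the essential fields shown):

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-9767
spec:
  type: ExternalName
  externalName: externalsvc.services-9767.svc.cluster.local   # target FQDN, as seen in the nslookup above
------------------------------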
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:21.881 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":109,"skipped":1853,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:09.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:28:09.726: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 29 21:28:11.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8144 create -f -' Mar 29 21:28:14.480: INFO: stderr: "" Mar 29 21:28:14.480: INFO: stdout: "e2e-test-crd-publish-openapi-145-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 29 21:28:14.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8144 delete e2e-test-crd-publish-openapi-145-crds test-cr' Mar 29 21:28:14.579: INFO: stderr: "" Mar 29 21:28:14.579: INFO: stdout: "e2e-test-crd-publish-openapi-145-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 29 21:28:14.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8144 apply -f -' Mar 29 21:28:14.846: INFO: stderr: "" Mar 29 21:28:14.846: INFO: stdout: "e2e-test-crd-publish-openapi-145-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 29 21:28:14.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8144 delete e2e-test-crd-publish-openapi-145-crds test-cr' Mar 29 21:28:14.946: INFO: stderr: "" Mar 29 21:28:14.946: INFO: stdout: "e2e-test-crd-publish-openapi-145-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 29 21:28:14.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-145-crds' Mar 29 21:28:15.526: INFO: stderr: "" Mar 29 21:28:15.526: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-145-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:18.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8144" for this suite. • [SLOW TEST:8.744 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":110,"skipped":1860,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:18.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 29 21:28:18.563: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 29 21:28:18.625: INFO: Waiting for terminating namespaces to be deleted... 
Mar 29 21:28:18.628: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 29 21:28:18.633: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:18.634: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 21:28:18.634: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:18.634: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 21:28:18.634: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 29 21:28:18.638: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:18.638: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 21:28:18.638: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Mar 29 21:28:18.638: INFO: Container kube-hunter ready: false, restart count 0 Mar 29 21:28:18.638: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:18.638: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 21:28:18.638: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Mar 29 21:28:18.638: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d15953d1-1409-4853-8c9f-61ed98102684 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-d15953d1-1409-4853-8c9f-61ed98102684 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d15953d1-1409-4853-8c9f-61ed98102684 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:34.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9572" for this suite.
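------------------------------
The scheduler only counts host ports as conflicting when the whole (hostIP, hostPort, protocol) triple matches, which is why pod2 (different hostIP) and pod3 (different protocol) can land on the node pod1 already occupies. A sketch of pod1's shape; the label key and value come from the log above, while the image and container port are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/e2e-d15953d1-1409-4853-8c9f-61ed98102684: "90"   # pins the pod to the labeled node
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 80
      hostPort: 54321
      hostIP: 127.0.0.1    # pod2 uses 127.0.0.2; pod3 keeps 127.0.0.2 but sets protocol: UDP
      protocol: TCP
------------------------------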
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.415 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":111,"skipped":1861,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:34.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0329 21:28:35.902088 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 29 21:28:35.902: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:35.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6111" for this suite. 
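The cascading deletion this garbage-collector test drives can be reproduced with a client-go call like the sketch below. It is a minimal sketch, not the e2e fixture: the deployment name is hypothetical, the kubeconfig path is the one shown in the log, and the pre-context Delete signature assumes a client-go release contemporary with the cluster's v1.17 API.

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Background propagation returns as soon as the Deployment is marked
        // deleted and leaves the garbage collector to reap the owned
        // ReplicaSet and pods, which is why the poll above briefly sees
        // "expected 0 rs, got 1 rs" before everything disappears.
        policy := metav1.DeletePropagationBackground
        err = client.AppsV1().Deployments("gc-6111").Delete(
            "example-deployment", // hypothetical name; the e2e fixture differs
            &metav1.DeleteOptions{PropagationPolicy: &policy},
        )
        if err != nil {
            panic(err)
        }
    }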
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":112,"skipped":1862,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:35.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 29 21:28:35.980: INFO: namespace kubectl-1994 Mar 29 21:28:35.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1994' Mar 29 21:28:36.348: INFO: stderr: "" Mar 29 21:28:36.348: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 29 21:28:37.355: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:28:37.355: INFO: Found 0 / 1 Mar 29 21:28:38.352: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:28:38.352: INFO: Found 0 / 1 Mar 29 21:28:39.353: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:28:39.353: INFO: Found 1 / 1 Mar 29 21:28:39.353: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 29 21:28:39.378: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:28:39.378: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 29 21:28:39.378: INFO: wait on agnhost-master startup in kubectl-1994 Mar 29 21:28:39.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-q8mtg agnhost-master --namespace=kubectl-1994' Mar 29 21:28:39.493: INFO: stderr: "" Mar 29 21:28:39.493: INFO: stdout: "Paused\n" STEP: exposing RC Mar 29 21:28:39.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1994' Mar 29 21:28:39.632: INFO: stderr: "" Mar 29 21:28:39.632: INFO: stdout: "service/rm2 exposed\n" Mar 29 21:28:39.637: INFO: Service rm2 in namespace kubectl-1994 found. STEP: exposing service Mar 29 21:28:41.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1994' Mar 29 21:28:41.895: INFO: stderr: "" Mar 29 21:28:41.895: INFO: stdout: "service/rm3 exposed\n" Mar 29 21:28:41.900: INFO: Service rm3 in namespace kubectl-1994 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:43.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1994" for this suite. 
• [SLOW TEST:8.005 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":113,"skipped":1879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:43.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-e442e175-a96c-494e-a2ec-7eb39b13ea96 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:43.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2552" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":114,"skipped":1905,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:43.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 29 21:28:44.023: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 29 21:28:44.045: INFO: Waiting for terminating namespaces to be deleted... 
Mar 29 21:28:44.048: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 29 21:28:44.053: INFO: agnhost-master-q8mtg from kubectl-1994 started at 2020-03-29 21:28:36 +0000 UTC (1 container status recorded) Mar 29 21:28:44.053: INFO: Container agnhost-master ready: true, restart count 0 Mar 29 21:28:44.053: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:44.053: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 21:28:44.053: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:44.053: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 21:28:44.053: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 29 21:28:44.059: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:44.059: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 21:28:44.059: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Mar 29 21:28:44.059: INFO: Container kube-hunter ready: false, restart count 0 Mar 29 21:28:44.059: INFO: pod3 from sched-pred-9572 started at 2020-03-29 21:28:30 +0000 UTC (1 container status recorded) Mar 29 21:28:44.059: INFO: Container pod3 ready: false, restart count 0 Mar 29 21:28:44.059: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 29 21:28:44.059: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 21:28:44.059: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Mar 29 21:28:44.059: INFO: Container kube-bench ready: false, restart count 0 Mar 29 21:28:44.059: INFO: pod1 from sched-pred-9572 started at 2020-03-29 21:28:22 +0000 UTC (1 container status recorded) Mar 29 21:28:44.059: INFO: Container pod1 ready: false, restart count 0 Mar 29 21:28:44.059: INFO: pod2 from sched-pred-9572 started at 2020-03-29 21:28:26 +0000 UTC (1 container status recorded) Mar 29 21:28:44.059: INFO: Container pod2 ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-97d00092-9f93-42c6-aa40-5b9ea8fdb357 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-97d00092-9f93-42c6-aa40-5b9ea8fdb357 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-97d00092-9f93-42c6-aa40-5b9ea8fdb357 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:52.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7394" for this suite.
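The NodeSelector predicate test above works in two steps: patch a random label onto the chosen node, then create a pod whose nodeSelector requires it. A minimal sketch of both steps, assuming v1.17-era (pre-context) client-go signatures; the node name is taken from the log, while the label key/value and pod name are illustrative only:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Apply a label to the chosen node; the e2e framework uses a random
        // kubernetes.io/e2e-<uuid> key, for which this key/value stands in.
        patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-example":"42"}}}`)
        if _, err := client.CoreV1().Nodes().Patch(
            "jerma-worker", types.StrategicMergePatchType, patch,
        ); err != nil {
            panic(err)
        }

        // A pod whose nodeSelector matches the label must land on that node.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
            Spec: v1.PodSpec{
                NodeSelector: map[string]string{"kubernetes.io/e2e-example": "42"},
                Containers: []v1.Container{{
                    Name:  "with-labels",
                    Image: "k8s.gcr.io/pause:3.1",
                }},
            },
        }
        if _, err := client.CoreV1().Pods("sched-pred-7394").Create(pod); err != nil {
            panic(err)
        }
    }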
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.302 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":115,"skipped":1905,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:52.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 29 21:28:52.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5523' Mar 29 21:28:52.453: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 29 21:28:52.453: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 29 21:28:54.464: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-pwh9m] Mar 29 21:28:54.464: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-pwh9m" in namespace "kubectl-5523" to be "running and ready" Mar 29 21:28:54.468: INFO: Pod "e2e-test-httpd-rc-pwh9m": Phase="Pending", Reason="", readiness=false. Elapsed: 3.253006ms Mar 29 21:28:56.471: INFO: Pod "e2e-test-httpd-rc-pwh9m": Phase="Running", Reason="", readiness=true. Elapsed: 2.007046309s Mar 29 21:28:56.471: INFO: Pod "e2e-test-httpd-rc-pwh9m" satisfied condition "running and ready" Mar 29 21:28:56.471: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-pwh9m] Mar 29 21:28:56.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5523' Mar 29 21:28:56.589: INFO: stderr: "" Mar 29 21:28:56.589: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.25. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.25. Set the 'ServerName' directive globally to suppress this message\n[Sun Mar 29 21:28:54.789519 2020] [mpm_event:notice] [pid 1:tid 140076192574312] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Mar 29 21:28:54.789582 2020] [core:notice] [pid 1:tid 140076192574312] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 29 21:28:56.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5523' Mar 29 21:28:56.686: INFO: stderr: "" Mar 29 21:28:56.686: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:28:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5523" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":116,"skipped":1916,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:28:56.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-c88bbdaf-d7a2-40eb-bd60-f40ee27fd466 STEP: Creating a pod to test consume configMaps Mar 29 21:28:56.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618" in namespace "projected-6480" to be "success or failure" Mar 29 21:28:56.956: INFO: Pod "pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618": Phase="Pending", Reason="", readiness=false. Elapsed: 38.495765ms Mar 29 21:28:58.963: INFO: Pod "pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046150188s Mar 29 21:29:00.967: INFO: Pod "pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049934022s STEP: Saw pod success Mar 29 21:29:00.967: INFO: Pod "pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618" satisfied condition "success or failure" Mar 29 21:29:00.969: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618 container projected-configmap-volume-test: STEP: delete the pod Mar 29 21:29:00.997: INFO: Waiting for pod pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618 to disappear Mar 29 21:29:01.028: INFO: Pod pod-projected-configmaps-541c2664-b17f-412a-a48d-04b12a31b618 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:29:01.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6480" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1926,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:29:01.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-752dfd65-d74b-4be4-9615-ac939cac1d31 STEP: Creating a pod to test consume secrets Mar 29 21:29:01.097: INFO: Waiting up to 5m0s for pod "pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5" in namespace "secrets-4420" to be "success or failure" Mar 29 21:29:01.116: INFO: Pod "pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.011203ms Mar 29 21:29:03.120: INFO: Pod "pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022211095s Mar 29 21:29:05.123: INFO: Pod "pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025928489s STEP: Saw pod success Mar 29 21:29:05.124: INFO: Pod "pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5" satisfied condition "success or failure" Mar 29 21:29:05.126: INFO: Trying to get logs from node jerma-worker pod pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5 container secret-volume-test: STEP: delete the pod Mar 29 21:29:05.149: INFO: Waiting for pod pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5 to disappear Mar 29 21:29:05.174: INFO: Pod pod-secrets-936c61cc-ad50-49f3-b60e-71217ea16cc5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:29:05.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4420" for this suite. 
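This test and the projected-configMap test before it follow the same pattern: mount the object as a volume, run a short-lived container that prints the mounted file, and check the logs against the expected content ("success or failure" then reading the container log). A minimal sketch of the pod shape, with placeholder names and a busybox stand-in for the e2e mounttest image:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: v1.PodSpec{
                // Never restart: the pod should run once and end Succeeded.
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: v1.VolumeSource{
                        Secret: &v1.SecretVolumeSource{SecretName: "secret-test-example"},
                    },
                }},
                Containers: []v1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox:1.29", // placeholder for the e2e mounttest image
                    Command: []string{"cat", "/etc/secret-volume/data-1"},
                    VolumeMounts: []v1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
        fmt.Printf("%+v\n", pod)
    }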
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1936,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:29:05.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:29:05.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1659" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":119,"skipped":1952,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:29:05.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 29 21:29:08.435: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:29:08.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5375" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:29:08.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0329 21:29:39.121705 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 29 21:29:39.121: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:29:39.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8570" for this suite. 
• [SLOW TEST:30.644 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":121,"skipped":2003,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:29:39.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:29:39.219: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 29 21:29:44.232: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 29 21:29:44.232: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 29 21:29:46.236: INFO: Creating deployment "test-rollover-deployment" Mar 29 21:29:46.244: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 29 21:29:48.251: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 29 21:29:48.257: INFO: Ensure that both replica sets have 1 created replica Mar 29 21:29:48.263: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 29 21:29:48.269: INFO: Updating deployment test-rollover-deployment Mar 29 21:29:48.269: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 29 21:29:50.280: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 29 21:29:50.286: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 29 21:29:50.291: INFO: all replica sets need to contain the pod-template-hash label Mar 29 21:29:50.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114188, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:29:52.300: INFO: all replica sets need to contain the pod-template-hash label Mar 29 21:29:52.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114191, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:29:54.300: INFO: all replica sets need to contain the pod-template-hash label Mar 29 21:29:54.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114191, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:29:56.299: INFO: all replica sets need to contain the pod-template-hash label Mar 29 21:29:56.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114191, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:29:58.299: INFO: all replica sets need to contain the pod-template-hash label Mar 29 21:29:58.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114191, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:30:00.299: INFO: all replica sets need to contain the pod-template-hash label Mar 29 21:30:00.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114191, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114186, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:30:02.299: INFO: Mar 29 21:30:02.299: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 29 21:30:02.307: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5288 /apis/apps/v1/namespaces/deployment-5288/deployments/test-rollover-deployment 05d628e1-cf53-44b2-a6a2-a1088f7330a8 3790719 2 2020-03-29 21:29:46 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e55188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-29 21:29:46 +0000 UTC,LastTransitionTime:2020-03-29 21:29:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-29 21:30:01 +0000 UTC,LastTransitionTime:2020-03-29 21:29:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 29 21:30:02.310: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5288 /apis/apps/v1/namespaces/deployment-5288/replicasets/test-rollover-deployment-574d6dfbff 1cfe0aaa-382f-4322-aa22-9c059719be05 3790708 2 2020-03-29 21:29:48 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 05d628e1-cf53-44b2-a6a2-a1088f7330a8 0xc003dfab77 0xc003dfab78}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003dfabf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:30:02.310: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 29 21:30:02.310: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5288 /apis/apps/v1/namespaces/deployment-5288/replicasets/test-rollover-controller a201eea2-f6e0-47cf-9e34-931665211f3c 3790717 2 2020-03-29 21:29:39 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 05d628e1-cf53-44b2-a6a2-a1088f7330a8 0xc003dfaa77 0xc003dfaa78}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003dfaae8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:30:02.310: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5288 /apis/apps/v1/namespaces/deployment-5288/replicasets/test-rollover-deployment-f6c94f66c 26694250-9e69-4402-a988-3fd5f760621f 3790658 2 2020-03-29 21:29:46 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 05d628e1-cf53-44b2-a6a2-a1088f7330a8 0xc003dfac80 0xc003dfac81}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003dfad18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:30:02.314: INFO: Pod "test-rollover-deployment-574d6dfbff-n2zzf" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-n2zzf test-rollover-deployment-574d6dfbff- deployment-5288 /api/v1/namespaces/deployment-5288/pods/test-rollover-deployment-574d6dfbff-n2zzf a3df5c54-8310-4088-9646-e065c71b8a88 3790675 0 2020-03-29 21:29:48 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 1cfe0aaa-382f-4322-aa22-9c059719be05 0xc003dfb367 0xc003dfb368}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bcqgz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bcqgz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bcqgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:29:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:29:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:29:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:29:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.28,StartTime:2020-03-29 21:29:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:29:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7643a0f2297e8094da01e1b23c2c4f29e16f73ed6c36020b8694385019f66426,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:02.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5288" for this suite. • [SLOW TEST:23.191 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":122,"skipped":2046,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:02.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:30:02.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48" in namespace "downward-api-6418" to be "success or failure" Mar 29 21:30:02.382: INFO: Pod "downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.772192ms Mar 29 21:30:04.386: INFO: Pod "downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007732308s Mar 29 21:30:06.412: INFO: Pod "downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033001757s STEP: Saw pod success Mar 29 21:30:06.412: INFO: Pod "downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48" satisfied condition "success or failure" Mar 29 21:30:06.414: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48 container client-container: STEP: delete the pod Mar 29 21:30:06.446: INFO: Waiting for pod downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48 to disappear Mar 29 21:30:06.460: INFO: Pod downwardapi-volume-c8d1c491-c099-4532-8210-2059c5ac1a48 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:06.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6418" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2046,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:06.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 29 21:30:11.087: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a0074a0f-b189-4d7a-9d96-d80be297abc9" Mar 29 21:30:11.087: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a0074a0f-b189-4d7a-9d96-d80be297abc9" in namespace "pods-6300" to be "terminated due to deadline exceeded" Mar 29 21:30:11.090: INFO: Pod "pod-update-activedeadlineseconds-a0074a0f-b189-4d7a-9d96-d80be297abc9": Phase="Running", Reason="", readiness=true. Elapsed: 3.242826ms Mar 29 21:30:13.103: INFO: Pod "pod-update-activedeadlineseconds-a0074a0f-b189-4d7a-9d96-d80be297abc9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.015444082s Mar 29 21:30:13.103: INFO: Pod "pod-update-activedeadlineseconds-a0074a0f-b189-4d7a-9d96-d80be297abc9" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:13.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6300" for this suite. 
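The pod-update test above relies on activeDeadlineSeconds being one of the few mutable pod-spec fields: shortening it on a running pod is accepted by the API server, and once the deadline lapses the kubelet fails the pod with reason DeadlineExceeded, the "terminated due to deadline exceeded" condition polled in the log. A minimal sketch of the update, assuming v1.17-era client-go signatures and a placeholder pod name:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        pods := client.CoreV1().Pods("pods-6300")
        pod, err := pods.Get("pod-update-activedeadlineseconds-example", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Tighten the deadline on the live object; the kubelet enforces it
        // and transitions the pod to Phase=Failed, Reason=DeadlineExceeded.
        deadline := int64(5)
        pod.Spec.ActiveDeadlineSeconds = &deadline
        if _, err := pods.Update(pod); err != nil {
            panic(err)
        }
    }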
• [SLOW TEST:6.669 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2056,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:13.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:30:13.659: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:30:15.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114213, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114213, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114213, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114213, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:30:18.695: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and 
validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:18.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9590" for this suite. STEP: Destroying namespace "webhook-9590-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.673 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":125,"skipped":2070,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:18.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 29 21:30:18.942: INFO: Waiting up to 5m0s for pod "pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584" in namespace "emptydir-9436" to be "success or failure" Mar 29 21:30:18.946: INFO: Pod "pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584": Phase="Pending", Reason="", readiness=false. Elapsed: 3.139232ms Mar 29 21:30:21.018: INFO: Pod "pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075367964s Mar 29 21:30:23.022: INFO: Pod "pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079800523s STEP: Saw pod success Mar 29 21:30:23.022: INFO: Pod "pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584" satisfied condition "success or failure" Mar 29 21:30:23.026: INFO: Trying to get logs from node jerma-worker pod pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584 container test-container: STEP: delete the pod Mar 29 21:30:23.056: INFO: Waiting for pod pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584 to disappear Mar 29 21:30:23.063: INFO: Pod pod-bc0d1421-502c-4a2d-bdbc-a146b0bd0584 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:23.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9436" for this suite. 
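------------------------------
The emptydir-9436 test just torn down runs a pod that writes a 0644-mode file into a tmpfs-backed emptyDir as a non-root user, then verifies the mount and file permissions from the container's output. A sketch of an equivalent manifest built with the Go API types; the name, UID, and image are illustrative, and the program prints JSON rather than submitting anything to a cluster.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1001) // any non-root UID; illustrative

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs, per the test name.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // any image with a shell works here
				Command: []string{"sh", "-c",
					"touch /scratch/f && chmod 0644 /scratch/f && ls -ln /scratch"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------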
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2087,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:23.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:28.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9296" for this suite. • [SLOW TEST:5.227 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":127,"skipped":2100,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:28.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 29 21:30:28.378: INFO: >>> kubeConfig: /root/.kube/config Mar 29 21:30:31.281: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:41.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6513" for this suite. 
• [SLOW TEST:13.401 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":128,"skipped":2112,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:41.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 29 21:30:41.789: INFO: Waiting up to 5m0s for pod "downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e" in namespace "downward-api-1198" to be "success or failure" Mar 29 21:30:41.806: INFO: Pod "downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.669367ms Mar 29 21:30:43.810: INFO: Pod "downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020895259s Mar 29 21:30:45.815: INFO: Pod "downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025684758s STEP: Saw pod success Mar 29 21:30:45.815: INFO: Pod "downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e" satisfied condition "success or failure" Mar 29 21:30:45.818: INFO: Trying to get logs from node jerma-worker2 pod downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e container dapi-container: STEP: delete the pod Mar 29 21:30:45.859: INFO: Waiting for pod downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e to disappear Mar 29 21:30:45.864: INFO: Pod downward-api-5b9a67f7-e8a0-4b7c-b542-dd718f6d9f8e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:45.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1198" for this suite. 
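------------------------------
The downward-api-1198 pod exposes its own resource requests and limits to the dapi-container through env vars backed by resourceFieldRef. A sketch of the relevant spec with illustrative names and quantities; note that with a divisor of "1", CPU values are rounded up to whole cores, so tests that need millicores use a "1m" divisor instead.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							Resource: "limits.cpu",
							Divisor:  resource.MustParse("1"), // 500m surfaces as 1 (rounded up)
						},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							Resource: "requests.memory",
							Divisor:  resource.MustParse("1"), // bytes
						},
					}},
				},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------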
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2130,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:45.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:30:45.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80" in namespace "projected-7353" to be "success or failure" Mar 29 21:30:45.936: INFO: Pod "downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.907921ms Mar 29 21:30:47.941: INFO: Pod "downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010024443s Mar 29 21:30:49.946: INFO: Pod "downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01446138s STEP: Saw pod success Mar 29 21:30:49.946: INFO: Pod "downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80" satisfied condition "success or failure" Mar 29 21:30:49.949: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80 container client-container: STEP: delete the pod Mar 29 21:30:49.979: INFO: Waiting for pod downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80 to disappear Mar 29 21:30:49.990: INFO: Pod downwardapi-volume-097196c1-c1bf-46d4-a726-a93e1e02ab80 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:49.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7353" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2142,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:49.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 29 21:30:50.082: INFO: Waiting up to 5m0s for pod "var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5" in namespace "var-expansion-2851" to be "success or failure" Mar 29 21:30:50.111: INFO: Pod "var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.68936ms Mar 29 21:30:52.115: INFO: Pod "var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033441938s Mar 29 21:30:54.119: INFO: Pod "var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0378474s STEP: Saw pod success Mar 29 21:30:54.120: INFO: Pod "var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5" satisfied condition "success or failure" Mar 29 21:30:54.123: INFO: Trying to get logs from node jerma-worker pod var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5 container dapi-container: STEP: delete the pod Mar 29 21:30:54.146: INFO: Waiting for pod var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5 to disappear Mar 29 21:30:54.156: INFO: Pod var-expansion-f4f61cfc-4e20-4c83-ae46-b8d21d4bdbf5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:54.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2851" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:54.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 29 21:30:58.376: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:30:58.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9906" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2195,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:30:58.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 29 21:30:58.495: INFO: Waiting up to 5m0s for pod "pod-f5adcb55-0c81-4888-b795-51bd2c760c81" in namespace "emptydir-4192" to be "success or failure" Mar 29 21:30:58.509: INFO: Pod "pod-f5adcb55-0c81-4888-b795-51bd2c760c81": Phase="Pending", Reason="", readiness=false. Elapsed: 14.186497ms Mar 29 21:31:00.512: INFO: Pod "pod-f5adcb55-0c81-4888-b795-51bd2c760c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01782034s Mar 29 21:31:02.538: INFO: Pod "pod-f5adcb55-0c81-4888-b795-51bd2c760c81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043755069s STEP: Saw pod success Mar 29 21:31:02.538: INFO: Pod "pod-f5adcb55-0c81-4888-b795-51bd2c760c81" satisfied condition "success or failure" Mar 29 21:31:02.541: INFO: Trying to get logs from node jerma-worker2 pod pod-f5adcb55-0c81-4888-b795-51bd2c760c81 container test-container: STEP: delete the pod Mar 29 21:31:02.560: INFO: Waiting for pod pod-f5adcb55-0c81-4888-b795-51bd2c760c81 to disappear Mar 29 21:31:02.565: INFO: Pod pod-f5adcb55-0c81-4888-b795-51bd2c760c81 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:31:02.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4192" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2195,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:31:02.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:31:02.679: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215" in namespace "downward-api-3859" to be "success or failure" Mar 29 21:31:02.685: INFO: Pod "downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215": Phase="Pending", Reason="", readiness=false. Elapsed: 5.99898ms Mar 29 21:31:04.688: INFO: Pod "downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009452703s Mar 29 21:31:06.692: INFO: Pod "downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013229438s STEP: Saw pod success Mar 29 21:31:06.692: INFO: Pod "downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215" satisfied condition "success or failure" Mar 29 21:31:06.695: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215 container client-container: STEP: delete the pod Mar 29 21:31:06.710: INFO: Waiting for pod downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215 to disappear Mar 29 21:31:06.714: INFO: Pod downwardapi-volume-8251233d-cb46-4d62-ab8a-6b6daeca9215 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:31:06.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3859" for this suite. 
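------------------------------
The downward-api-3859 test above is the plain, non-projected variant: a downwardAPI volume exposing the container's cpu request as a file. A sketch with illustrative names; the "1m" divisor makes the file hold millicores (250 for a 250m request), and defaultMode here is just an example.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0644)

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &defaultMode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
								Divisor:       resource.MustParse("1m"), // surface millicores
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------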
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:31:06.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b21772aa-c8ea-45d4-ad57-9188b001db7c STEP: Creating a pod to test consume configMaps Mar 29 21:31:06.811: INFO: Waiting up to 5m0s for pod "pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03" in namespace "configmap-2433" to be "success or failure" Mar 29 21:31:06.817: INFO: Pod "pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305496ms Mar 29 21:31:08.820: INFO: Pod "pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009669664s Mar 29 21:31:10.824: INFO: Pod "pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013746726s STEP: Saw pod success Mar 29 21:31:10.824: INFO: Pod "pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03" satisfied condition "success or failure" Mar 29 21:31:10.828: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03 container configmap-volume-test: STEP: delete the pod Mar 29 21:31:10.848: INFO: Waiting for pod pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03 to disappear Mar 29 21:31:10.853: INFO: Pod pod-configmaps-a70f199d-15bf-46d0-8f8e-86bb8a5fae03 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:31:10.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2433" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:31:10.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 29 21:31:10.944: INFO: Waiting up to 5m0s for pod "pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b" in namespace "emptydir-1271" to be "success or failure" Mar 29 21:31:10.970: INFO: Pod "pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.18881ms Mar 29 21:31:12.988: INFO: Pod "pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043599292s Mar 29 21:31:14.992: INFO: Pod "pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047887551s STEP: Saw pod success Mar 29 21:31:14.992: INFO: Pod "pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b" satisfied condition "success or failure" Mar 29 21:31:14.996: INFO: Trying to get logs from node jerma-worker2 pod pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b container test-container: STEP: delete the pod Mar 29 21:31:15.027: INFO: Waiting for pod pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b to disappear Mar 29 21:31:15.037: INFO: Pod pod-3078ccf4-11d3-4cd6-87e1-60990bf23d6b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:31:15.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1271" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:31:15.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 29 21:31:19.119: INFO: &Pod{ObjectMeta:{send-events-f3b24196-5ec6-44e6-aaf6-41a19e13ade8 events-8697 /api/v1/namespaces/events-8697/pods/send-events-f3b24196-5ec6-44e6-aaf6-41a19e13ade8 30c55f38-6352-4050-8d69-198b7ae8d90c 3791457 0 2020-03-29 21:31:15 +0000 UTC map[name:foo time:79479471] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z664d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z664d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z664d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoE
xecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:31:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:31:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:31:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:31:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.254,StartTime:2020-03-29 21:31:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:31:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8f68145a338d23720ea17bda41612d0778c8b657c4a4e72e37ef7d4f6bd1f91b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.254,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 29 21:31:21.124: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 29 21:31:23.128: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:31:23.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8697" for this suite. 
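------------------------------
The events-8697 test above looks for scheduler and kubelet events attached to its pod. Such events can be listed with a field selector over the involvedObject fields; a sketch, with placeholder pod and namespace names and an assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Select only events whose involvedObject is our pod.
	selector := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "my-pod",
		"involvedObject.namespace": "default",
	}.AsSelector().String()

	events, err := clientset.CoreV1().Events("default").List(
		context.Background(), metav1.ListOptions{FieldSelector: selector})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Scheduler events report Source.Component "default-scheduler";
		// kubelet events report "kubelet", which is what the test checks for.
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
}
------------------------------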
• [SLOW TEST:8.120 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":137,"skipped":2334,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:31:23.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8219.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8219.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8219.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:31:29.295: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.299: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.303: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.306: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.316: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.321: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.324: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.327: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:29.332: INFO: Lookups using dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local] Mar 29 21:31:34.337: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource 
(get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.341: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.344: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.347: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.354: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.356: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.359: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.362: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:34.367: INFO: Lookups using dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local] Mar 29 21:31:39.339: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.343: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.346: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.349: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local from 
pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.360: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.363: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.367: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.369: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:39.375: INFO: Lookups using dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local] Mar 29 21:31:44.338: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.340: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.343: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.346: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.355: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.359: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods 
dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.361: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.364: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:44.370: INFO: Lookups using dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local] Mar 29 21:31:49.338: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.341: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.344: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.346: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.357: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.359: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.362: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.364: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:49.369: INFO: Lookups using dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local] Mar 29 21:31:54.337: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.342: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.345: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.347: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.355: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.357: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.360: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.363: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local from pod dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e: the server could not find the requested resource (get pods dns-test-9613d2eb-aa89-4936-a063-30f3799e191e) Mar 29 21:31:54.387: INFO: Lookups using dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8219.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8219.svc.cluster.local jessie_udp@dns-test-service-2.dns-8219.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8219.svc.cluster.local] Mar 29 21:31:59.375: INFO: DNS probes using dns-8219/dns-test-9613d2eb-aa89-4936-a063-30f3799e191e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:31:59.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8219" for this suite. • [SLOW TEST:36.329 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":138,"skipped":2346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:31:59.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-385 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-385 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-385 Mar 29 21:31:59.978: INFO: Found 0 stateful pods, waiting for 1 Mar 29 21:32:09.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 29 21:32:09.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:32:10.240: INFO: stderr: "I0329 21:32:10.122091 1360 log.go:172] (0xc00010a370) (0xc00078d400) Create stream\nI0329 21:32:10.122152 1360 log.go:172] (0xc00010a370) (0xc00078d400) Stream added, broadcasting: 1\nI0329 21:32:10.124371 1360 log.go:172] (0xc00010a370) Reply frame received for 1\nI0329 21:32:10.124413 1360 log.go:172] (0xc00010a370) (0xc000952000) Create stream\nI0329 21:32:10.124422 1360 log.go:172] (0xc00010a370) (0xc000952000) Stream added, broadcasting: 3\nI0329 21:32:10.125579 1360 log.go:172] (0xc00010a370) Reply frame received for 3\nI0329 21:32:10.125632 1360 log.go:172] (0xc00010a370) (0xc000a36000) Create stream\nI0329 21:32:10.125667 1360 log.go:172] (0xc00010a370) (0xc000a36000) Stream added, broadcasting: 5\nI0329 21:32:10.126584 1360 log.go:172] (0xc00010a370) Reply frame received for 5\nI0329 21:32:10.199282 1360 log.go:172] (0xc00010a370) Data frame 
received for 5\nI0329 21:32:10.199320 1360 log.go:172] (0xc000a36000) (5) Data frame handling\nI0329 21:32:10.199355 1360 log.go:172] (0xc000a36000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:32:10.232948 1360 log.go:172] (0xc00010a370) Data frame received for 5\nI0329 21:32:10.233070 1360 log.go:172] (0xc000a36000) (5) Data frame handling\nI0329 21:32:10.233255 1360 log.go:172] (0xc00010a370) Data frame received for 3\nI0329 21:32:10.233281 1360 log.go:172] (0xc000952000) (3) Data frame handling\nI0329 21:32:10.233294 1360 log.go:172] (0xc000952000) (3) Data frame sent\nI0329 21:32:10.233366 1360 log.go:172] (0xc00010a370) Data frame received for 3\nI0329 21:32:10.233403 1360 log.go:172] (0xc000952000) (3) Data frame handling\nI0329 21:32:10.235711 1360 log.go:172] (0xc00010a370) Data frame received for 1\nI0329 21:32:10.235754 1360 log.go:172] (0xc00078d400) (1) Data frame handling\nI0329 21:32:10.235809 1360 log.go:172] (0xc00078d400) (1) Data frame sent\nI0329 21:32:10.235869 1360 log.go:172] (0xc00010a370) (0xc00078d400) Stream removed, broadcasting: 1\nI0329 21:32:10.235910 1360 log.go:172] (0xc00010a370) Go away received\nI0329 21:32:10.236368 1360 log.go:172] (0xc00010a370) (0xc00078d400) Stream removed, broadcasting: 1\nI0329 21:32:10.236389 1360 log.go:172] (0xc00010a370) (0xc000952000) Stream removed, broadcasting: 3\nI0329 21:32:10.236402 1360 log.go:172] (0xc00010a370) (0xc000a36000) Stream removed, broadcasting: 5\n" Mar 29 21:32:10.240: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:32:10.240: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:32:10.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 29 21:32:20.250: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:32:20.250: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:32:20.267: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999443s Mar 29 21:32:21.276: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992658065s Mar 29 21:32:22.281: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983872817s Mar 29 21:32:23.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.979073484s Mar 29 21:32:24.313: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.974637016s Mar 29 21:32:25.317: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.947217767s Mar 29 21:32:26.324: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.942735179s Mar 29 21:32:27.329: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.935573578s Mar 29 21:32:28.334: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.930548309s Mar 29 21:32:29.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 926.014904ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-385 Mar 29 21:32:30.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:32:30.581: INFO: stderr: "I0329 21:32:30.490848 1382 log.go:172] (0xc000639130) (0xc0009da000) Create stream\nI0329 21:32:30.490918 1382 log.go:172] (0xc000639130) (0xc0009da000) 
Stream added, broadcasting: 1\nI0329 21:32:30.493735 1382 log.go:172] (0xc000639130) Reply frame received for 1\nI0329 21:32:30.493787 1382 log.go:172] (0xc000639130) (0xc0008da000) Create stream\nI0329 21:32:30.493803 1382 log.go:172] (0xc000639130) (0xc0008da000) Stream added, broadcasting: 3\nI0329 21:32:30.494861 1382 log.go:172] (0xc000639130) Reply frame received for 3\nI0329 21:32:30.494888 1382 log.go:172] (0xc000639130) (0xc0009da0a0) Create stream\nI0329 21:32:30.494906 1382 log.go:172] (0xc000639130) (0xc0009da0a0) Stream added, broadcasting: 5\nI0329 21:32:30.495938 1382 log.go:172] (0xc000639130) Reply frame received for 5\nI0329 21:32:30.574943 1382 log.go:172] (0xc000639130) Data frame received for 5\nI0329 21:32:30.574977 1382 log.go:172] (0xc0009da0a0) (5) Data frame handling\nI0329 21:32:30.574994 1382 log.go:172] (0xc0009da0a0) (5) Data frame sent\nI0329 21:32:30.575005 1382 log.go:172] (0xc000639130) Data frame received for 5\nI0329 21:32:30.575014 1382 log.go:172] (0xc0009da0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0329 21:32:30.575063 1382 log.go:172] (0xc000639130) Data frame received for 3\nI0329 21:32:30.575092 1382 log.go:172] (0xc0008da000) (3) Data frame handling\nI0329 21:32:30.575116 1382 log.go:172] (0xc0008da000) (3) Data frame sent\nI0329 21:32:30.575136 1382 log.go:172] (0xc000639130) Data frame received for 3\nI0329 21:32:30.575158 1382 log.go:172] (0xc0008da000) (3) Data frame handling\nI0329 21:32:30.576519 1382 log.go:172] (0xc000639130) Data frame received for 1\nI0329 21:32:30.576542 1382 log.go:172] (0xc0009da000) (1) Data frame handling\nI0329 21:32:30.576557 1382 log.go:172] (0xc0009da000) (1) Data frame sent\nI0329 21:32:30.576574 1382 log.go:172] (0xc000639130) (0xc0009da000) Stream removed, broadcasting: 1\nI0329 21:32:30.576634 1382 log.go:172] (0xc000639130) Go away received\nI0329 21:32:30.576956 1382 log.go:172] (0xc000639130) (0xc0009da000) Stream removed, broadcasting: 1\nI0329 21:32:30.576974 1382 log.go:172] (0xc000639130) (0xc0008da000) Stream removed, broadcasting: 3\nI0329 21:32:30.576994 1382 log.go:172] (0xc000639130) (0xc0009da0a0) Stream removed, broadcasting: 5\n" Mar 29 21:32:30.582: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:32:30.582: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:32:30.586: INFO: Found 1 stateful pods, waiting for 3 Mar 29 21:32:40.591: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:32:40.591: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:32:40.591: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 29 21:32:40.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:32:40.833: INFO: stderr: "I0329 21:32:40.730960 1404 log.go:172] (0xc00011ae70) (0xc000b40000) Create stream\nI0329 21:32:40.731034 1404 log.go:172] (0xc00011ae70) (0xc000b40000) Stream added, broadcasting: 1\nI0329 21:32:40.733331 1404 log.go:172] (0xc00011ae70) Reply frame received for 1\nI0329 21:32:40.733366 1404 log.go:172] (0xc00011ae70) (0xc00067bae0) 
Create stream\nI0329 21:32:40.733378 1404 log.go:172] (0xc00011ae70) (0xc00067bae0) Stream added, broadcasting: 3\nI0329 21:32:40.734284 1404 log.go:172] (0xc00011ae70) Reply frame received for 3\nI0329 21:32:40.734325 1404 log.go:172] (0xc00011ae70) (0xc000b400a0) Create stream\nI0329 21:32:40.734344 1404 log.go:172] (0xc00011ae70) (0xc000b400a0) Stream added, broadcasting: 5\nI0329 21:32:40.735350 1404 log.go:172] (0xc00011ae70) Reply frame received for 5\nI0329 21:32:40.827394 1404 log.go:172] (0xc00011ae70) Data frame received for 5\nI0329 21:32:40.827423 1404 log.go:172] (0xc000b400a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:32:40.827449 1404 log.go:172] (0xc00011ae70) Data frame received for 3\nI0329 21:32:40.827486 1404 log.go:172] (0xc00067bae0) (3) Data frame handling\nI0329 21:32:40.827499 1404 log.go:172] (0xc00067bae0) (3) Data frame sent\nI0329 21:32:40.827510 1404 log.go:172] (0xc00011ae70) Data frame received for 3\nI0329 21:32:40.827523 1404 log.go:172] (0xc00067bae0) (3) Data frame handling\nI0329 21:32:40.827551 1404 log.go:172] (0xc000b400a0) (5) Data frame sent\nI0329 21:32:40.827706 1404 log.go:172] (0xc00011ae70) Data frame received for 5\nI0329 21:32:40.827717 1404 log.go:172] (0xc000b400a0) (5) Data frame handling\nI0329 21:32:40.829250 1404 log.go:172] (0xc00011ae70) Data frame received for 1\nI0329 21:32:40.829279 1404 log.go:172] (0xc000b40000) (1) Data frame handling\nI0329 21:32:40.829295 1404 log.go:172] (0xc000b40000) (1) Data frame sent\nI0329 21:32:40.829308 1404 log.go:172] (0xc00011ae70) (0xc000b40000) Stream removed, broadcasting: 1\nI0329 21:32:40.829327 1404 log.go:172] (0xc00011ae70) Go away received\nI0329 21:32:40.829746 1404 log.go:172] (0xc00011ae70) (0xc000b40000) Stream removed, broadcasting: 1\nI0329 21:32:40.829768 1404 log.go:172] (0xc00011ae70) (0xc00067bae0) Stream removed, broadcasting: 3\nI0329 21:32:40.829779 1404 log.go:172] (0xc00011ae70) (0xc000b400a0) Stream removed, broadcasting: 5\n" Mar 29 21:32:40.833: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:32:40.833: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:32:40.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:32:41.109: INFO: stderr: "I0329 21:32:40.999932 1425 log.go:172] (0xc0000f4f20) (0xc000628000) Create stream\nI0329 21:32:41.000003 1425 log.go:172] (0xc0000f4f20) (0xc000628000) Stream added, broadcasting: 1\nI0329 21:32:41.003474 1425 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0329 21:32:41.003514 1425 log.go:172] (0xc0000f4f20) (0xc0006099a0) Create stream\nI0329 21:32:41.003531 1425 log.go:172] (0xc0000f4f20) (0xc0006099a0) Stream added, broadcasting: 3\nI0329 21:32:41.004881 1425 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0329 21:32:41.004927 1425 log.go:172] (0xc0000f4f20) (0xc0006280a0) Create stream\nI0329 21:32:41.004941 1425 log.go:172] (0xc0000f4f20) (0xc0006280a0) Stream added, broadcasting: 5\nI0329 21:32:41.006171 1425 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0329 21:32:41.061940 1425 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0329 21:32:41.061989 1425 log.go:172] (0xc0006280a0) (5) Data frame handling\nI0329 21:32:41.062016 1425 log.go:172] (0xc0006280a0) (5) Data frame 
sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:32:41.101528 1425 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0329 21:32:41.101562 1425 log.go:172] (0xc0006099a0) (3) Data frame handling\nI0329 21:32:41.101581 1425 log.go:172] (0xc0006099a0) (3) Data frame sent\nI0329 21:32:41.101590 1425 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0329 21:32:41.101598 1425 log.go:172] (0xc0006099a0) (3) Data frame handling\nI0329 21:32:41.101774 1425 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0329 21:32:41.101795 1425 log.go:172] (0xc0006280a0) (5) Data frame handling\nI0329 21:32:41.103970 1425 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0329 21:32:41.104000 1425 log.go:172] (0xc000628000) (1) Data frame handling\nI0329 21:32:41.104014 1425 log.go:172] (0xc000628000) (1) Data frame sent\nI0329 21:32:41.104029 1425 log.go:172] (0xc0000f4f20) (0xc000628000) Stream removed, broadcasting: 1\nI0329 21:32:41.104048 1425 log.go:172] (0xc0000f4f20) Go away received\nI0329 21:32:41.104555 1425 log.go:172] (0xc0000f4f20) (0xc000628000) Stream removed, broadcasting: 1\nI0329 21:32:41.104599 1425 log.go:172] (0xc0000f4f20) (0xc0006099a0) Stream removed, broadcasting: 3\nI0329 21:32:41.104612 1425 log.go:172] (0xc0000f4f20) (0xc0006280a0) Stream removed, broadcasting: 5\n" Mar 29 21:32:41.109: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:32:41.109: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:32:41.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:32:41.348: INFO: stderr: "I0329 21:32:41.230358 1447 log.go:172] (0xc00010d290) (0xc0009d8000) Create stream\nI0329 21:32:41.230407 1447 log.go:172] (0xc00010d290) (0xc0009d8000) Stream added, broadcasting: 1\nI0329 21:32:41.232902 1447 log.go:172] (0xc00010d290) Reply frame received for 1\nI0329 21:32:41.232948 1447 log.go:172] (0xc00010d290) (0xc000a16000) Create stream\nI0329 21:32:41.232960 1447 log.go:172] (0xc00010d290) (0xc000a16000) Stream added, broadcasting: 3\nI0329 21:32:41.234005 1447 log.go:172] (0xc00010d290) Reply frame received for 3\nI0329 21:32:41.234051 1447 log.go:172] (0xc00010d290) (0xc0009d80a0) Create stream\nI0329 21:32:41.234072 1447 log.go:172] (0xc00010d290) (0xc0009d80a0) Stream added, broadcasting: 5\nI0329 21:32:41.234883 1447 log.go:172] (0xc00010d290) Reply frame received for 5\nI0329 21:32:41.307152 1447 log.go:172] (0xc00010d290) Data frame received for 5\nI0329 21:32:41.307175 1447 log.go:172] (0xc0009d80a0) (5) Data frame handling\nI0329 21:32:41.307190 1447 log.go:172] (0xc0009d80a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:32:41.340524 1447 log.go:172] (0xc00010d290) Data frame received for 3\nI0329 21:32:41.340564 1447 log.go:172] (0xc000a16000) (3) Data frame handling\nI0329 21:32:41.340578 1447 log.go:172] (0xc000a16000) (3) Data frame sent\nI0329 21:32:41.340888 1447 log.go:172] (0xc00010d290) Data frame received for 3\nI0329 21:32:41.340933 1447 log.go:172] (0xc000a16000) (3) Data frame handling\nI0329 21:32:41.341339 1447 log.go:172] (0xc00010d290) Data frame received for 5\nI0329 21:32:41.341364 1447 log.go:172] (0xc0009d80a0) (5) Data frame handling\nI0329 21:32:41.343298 1447 log.go:172] (0xc00010d290) Data frame received for 
1\nI0329 21:32:41.343333 1447 log.go:172] (0xc0009d8000) (1) Data frame handling\nI0329 21:32:41.343358 1447 log.go:172] (0xc0009d8000) (1) Data frame sent\nI0329 21:32:41.343426 1447 log.go:172] (0xc00010d290) (0xc0009d8000) Stream removed, broadcasting: 1\nI0329 21:32:41.343605 1447 log.go:172] (0xc00010d290) Go away received\nI0329 21:32:41.343873 1447 log.go:172] (0xc00010d290) (0xc0009d8000) Stream removed, broadcasting: 1\nI0329 21:32:41.343902 1447 log.go:172] (0xc00010d290) (0xc000a16000) Stream removed, broadcasting: 3\nI0329 21:32:41.343915 1447 log.go:172] (0xc00010d290) (0xc0009d80a0) Stream removed, broadcasting: 5\n" Mar 29 21:32:41.348: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:32:41.348: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:32:41.348: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:32:41.372: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 29 21:32:51.380: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:32:51.380: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:32:51.380: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:32:51.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999679s Mar 29 21:32:52.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996649175s Mar 29 21:32:53.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991430239s Mar 29 21:32:54.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985738437s Mar 29 21:32:55.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980758345s Mar 29 21:32:56.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975988195s Mar 29 21:32:57.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971380111s Mar 29 21:32:58.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966382868s Mar 29 21:32:59.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961217474s Mar 29 21:33:00.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.245645ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-385 Mar 29 21:33:01.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:33:01.661: INFO: stderr: "I0329 21:33:01.575945 1466 log.go:172] (0xc00011d550) (0xc000709b80) Create stream\nI0329 21:33:01.576034 1466 log.go:172] (0xc00011d550) (0xc000709b80) Stream added, broadcasting: 1\nI0329 21:33:01.579253 1466 log.go:172] (0xc00011d550) Reply frame received for 1\nI0329 21:33:01.579307 1466 log.go:172] (0xc00011d550) (0xc000954000) Create stream\nI0329 21:33:01.579325 1466 log.go:172] (0xc00011d550) (0xc000954000) Stream added, broadcasting: 3\nI0329 21:33:01.580341 1466 log.go:172] (0xc00011d550) Reply frame received for 3\nI0329 21:33:01.580373 1466 log.go:172] (0xc00011d550) (0xc000709d60) Create stream\nI0329 21:33:01.580385 1466 log.go:172] (0xc00011d550) (0xc000709d60) Stream added, broadcasting: 5\nI0329 21:33:01.581581 1466 log.go:172] (0xc00011d550) Reply frame received for 
5\nI0329 21:33:01.653219 1466 log.go:172] (0xc00011d550) Data frame received for 5\nI0329 21:33:01.653247 1466 log.go:172] (0xc000709d60) (5) Data frame handling\nI0329 21:33:01.653263 1466 log.go:172] (0xc000709d60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0329 21:33:01.653460 1466 log.go:172] (0xc00011d550) Data frame received for 3\nI0329 21:33:01.653488 1466 log.go:172] (0xc000954000) (3) Data frame handling\nI0329 21:33:01.653513 1466 log.go:172] (0xc000954000) (3) Data frame sent\nI0329 21:33:01.653526 1466 log.go:172] (0xc00011d550) Data frame received for 3\nI0329 21:33:01.653556 1466 log.go:172] (0xc000954000) (3) Data frame handling\nI0329 21:33:01.654240 1466 log.go:172] (0xc00011d550) Data frame received for 5\nI0329 21:33:01.654256 1466 log.go:172] (0xc000709d60) (5) Data frame handling\nI0329 21:33:01.655660 1466 log.go:172] (0xc00011d550) Data frame received for 1\nI0329 21:33:01.655681 1466 log.go:172] (0xc000709b80) (1) Data frame handling\nI0329 21:33:01.655707 1466 log.go:172] (0xc000709b80) (1) Data frame sent\nI0329 21:33:01.655720 1466 log.go:172] (0xc00011d550) (0xc000709b80) Stream removed, broadcasting: 1\nI0329 21:33:01.655732 1466 log.go:172] (0xc00011d550) Go away received\nI0329 21:33:01.656163 1466 log.go:172] (0xc00011d550) (0xc000709b80) Stream removed, broadcasting: 1\nI0329 21:33:01.656194 1466 log.go:172] (0xc00011d550) (0xc000954000) Stream removed, broadcasting: 3\nI0329 21:33:01.656206 1466 log.go:172] (0xc00011d550) (0xc000709d60) Stream removed, broadcasting: 5\n" Mar 29 21:33:01.661: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:33:01.661: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:33:01.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:33:01.879: INFO: stderr: "I0329 21:33:01.800333 1488 log.go:172] (0xc000c34fd0) (0xc000aa8640) Create stream\nI0329 21:33:01.800383 1488 log.go:172] (0xc000c34fd0) (0xc000aa8640) Stream added, broadcasting: 1\nI0329 21:33:01.805615 1488 log.go:172] (0xc000c34fd0) Reply frame received for 1\nI0329 21:33:01.805667 1488 log.go:172] (0xc000c34fd0) (0xc0006c5b80) Create stream\nI0329 21:33:01.805685 1488 log.go:172] (0xc000c34fd0) (0xc0006c5b80) Stream added, broadcasting: 3\nI0329 21:33:01.806652 1488 log.go:172] (0xc000c34fd0) Reply frame received for 3\nI0329 21:33:01.806679 1488 log.go:172] (0xc000c34fd0) (0xc0006c5c20) Create stream\nI0329 21:33:01.806689 1488 log.go:172] (0xc000c34fd0) (0xc0006c5c20) Stream added, broadcasting: 5\nI0329 21:33:01.807571 1488 log.go:172] (0xc000c34fd0) Reply frame received for 5\nI0329 21:33:01.873961 1488 log.go:172] (0xc000c34fd0) Data frame received for 5\nI0329 21:33:01.873995 1488 log.go:172] (0xc0006c5c20) (5) Data frame handling\nI0329 21:33:01.874009 1488 log.go:172] (0xc0006c5c20) (5) Data frame sent\nI0329 21:33:01.874019 1488 log.go:172] (0xc000c34fd0) Data frame received for 5\nI0329 21:33:01.874027 1488 log.go:172] (0xc0006c5c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0329 21:33:01.874052 1488 log.go:172] (0xc000c34fd0) Data frame received for 3\nI0329 21:33:01.874061 1488 log.go:172] (0xc0006c5b80) (3) Data frame handling\nI0329 21:33:01.874073 1488 log.go:172] (0xc0006c5b80) (3) Data frame sent\nI0329 
21:33:01.874086 1488 log.go:172] (0xc000c34fd0) Data frame received for 3\nI0329 21:33:01.874095 1488 log.go:172] (0xc0006c5b80) (3) Data frame handling\nI0329 21:33:01.875465 1488 log.go:172] (0xc000c34fd0) Data frame received for 1\nI0329 21:33:01.875485 1488 log.go:172] (0xc000aa8640) (1) Data frame handling\nI0329 21:33:01.875502 1488 log.go:172] (0xc000aa8640) (1) Data frame sent\nI0329 21:33:01.875517 1488 log.go:172] (0xc000c34fd0) (0xc000aa8640) Stream removed, broadcasting: 1\nI0329 21:33:01.875708 1488 log.go:172] (0xc000c34fd0) Go away received\nI0329 21:33:01.875878 1488 log.go:172] (0xc000c34fd0) (0xc000aa8640) Stream removed, broadcasting: 1\nI0329 21:33:01.875895 1488 log.go:172] (0xc000c34fd0) (0xc0006c5b80) Stream removed, broadcasting: 3\nI0329 21:33:01.875904 1488 log.go:172] (0xc000c34fd0) (0xc0006c5c20) Stream removed, broadcasting: 5\n" Mar 29 21:33:01.880: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:33:01.880: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:33:01.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-385 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:33:02.103: INFO: stderr: "I0329 21:33:02.019414 1509 log.go:172] (0xc00010a2c0) (0xc0006cbae0) Create stream\nI0329 21:33:02.019464 1509 log.go:172] (0xc00010a2c0) (0xc0006cbae0) Stream added, broadcasting: 1\nI0329 21:33:02.022780 1509 log.go:172] (0xc00010a2c0) Reply frame received for 1\nI0329 21:33:02.022855 1509 log.go:172] (0xc00010a2c0) (0xc0006cbb80) Create stream\nI0329 21:33:02.022874 1509 log.go:172] (0xc00010a2c0) (0xc0006cbb80) Stream added, broadcasting: 3\nI0329 21:33:02.024014 1509 log.go:172] (0xc00010a2c0) Reply frame received for 3\nI0329 21:33:02.024055 1509 log.go:172] (0xc00010a2c0) (0xc0005e46e0) Create stream\nI0329 21:33:02.024067 1509 log.go:172] (0xc00010a2c0) (0xc0005e46e0) Stream added, broadcasting: 5\nI0329 21:33:02.025356 1509 log.go:172] (0xc00010a2c0) Reply frame received for 5\nI0329 21:33:02.095966 1509 log.go:172] (0xc00010a2c0) Data frame received for 5\nI0329 21:33:02.096024 1509 log.go:172] (0xc0005e46e0) (5) Data frame handling\nI0329 21:33:02.096047 1509 log.go:172] (0xc0005e46e0) (5) Data frame sent\nI0329 21:33:02.096067 1509 log.go:172] (0xc00010a2c0) Data frame received for 5\nI0329 21:33:02.096085 1509 log.go:172] (0xc0005e46e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0329 21:33:02.096147 1509 log.go:172] (0xc00010a2c0) Data frame received for 3\nI0329 21:33:02.096182 1509 log.go:172] (0xc0006cbb80) (3) Data frame handling\nI0329 21:33:02.096225 1509 log.go:172] (0xc0006cbb80) (3) Data frame sent\nI0329 21:33:02.096245 1509 log.go:172] (0xc00010a2c0) Data frame received for 3\nI0329 21:33:02.096255 1509 log.go:172] (0xc0006cbb80) (3) Data frame handling\nI0329 21:33:02.098148 1509 log.go:172] (0xc00010a2c0) Data frame received for 1\nI0329 21:33:02.098175 1509 log.go:172] (0xc0006cbae0) (1) Data frame handling\nI0329 21:33:02.098208 1509 log.go:172] (0xc0006cbae0) (1) Data frame sent\nI0329 21:33:02.098238 1509 log.go:172] (0xc00010a2c0) (0xc0006cbae0) Stream removed, broadcasting: 1\nI0329 21:33:02.098262 1509 log.go:172] (0xc00010a2c0) Go away received\nI0329 21:33:02.098733 1509 log.go:172] (0xc00010a2c0) (0xc0006cbae0) Stream removed, broadcasting: 1\nI0329 21:33:02.098762 1509 
log.go:172] (0xc00010a2c0) (0xc0006cbb80) Stream removed, broadcasting: 3\nI0329 21:33:02.098776 1509 log.go:172] (0xc00010a2c0) (0xc0005e46e0) Stream removed, broadcasting: 5\n" Mar 29 21:33:02.103: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:33:02.104: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:33:02.104: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 29 21:33:32.118: INFO: Deleting all statefulset in ns statefulset-385 Mar 29 21:33:32.121: INFO: Scaling statefulset ss to 0 Mar 29 21:33:32.131: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:33:32.133: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:33:32.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-385" for this suite. • [SLOW TEST:92.657 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":139,"skipped":2375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:33:32.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:33:32.241: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6" in namespace "downward-api-7281" to be "success or failure" Mar 29 21:33:32.244: INFO: Pod "downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843234ms Mar 29 21:33:34.248: INFO: Pod "downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007061204s Mar 29 21:33:36.252: INFO: Pod "downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011177866s STEP: Saw pod success Mar 29 21:33:36.252: INFO: Pod "downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6" satisfied condition "success or failure" Mar 29 21:33:36.256: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6 container client-container: STEP: delete the pod Mar 29 21:33:36.309: INFO: Waiting for pod downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6 to disappear Mar 29 21:33:36.323: INFO: Pod downwardapi-volume-c1cb42c4-4ece-4a24-8e9f-432758182bd6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:33:36.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7281" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:33:36.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:33:43.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4113" for this suite. • [SLOW TEST:7.090 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":141,"skipped":2424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:33:43.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 29 21:33:43.480: INFO: Waiting up to 5m0s for pod "pod-b80caedd-7b6c-4615-bff6-068f60f6eeae" in namespace "emptydir-7270" to be "success or failure" Mar 29 21:33:43.553: INFO: Pod "pod-b80caedd-7b6c-4615-bff6-068f60f6eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 73.55317ms Mar 29 21:33:45.558: INFO: Pod "pod-b80caedd-7b6c-4615-bff6-068f60f6eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078014351s Mar 29 21:33:47.562: INFO: Pod "pod-b80caedd-7b6c-4615-bff6-068f60f6eeae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082267753s STEP: Saw pod success Mar 29 21:33:47.562: INFO: Pod "pod-b80caedd-7b6c-4615-bff6-068f60f6eeae" satisfied condition "success or failure" Mar 29 21:33:47.565: INFO: Trying to get logs from node jerma-worker2 pod pod-b80caedd-7b6c-4615-bff6-068f60f6eeae container test-container: STEP: delete the pod Mar 29 21:33:47.598: INFO: Waiting for pod pod-b80caedd-7b6c-4615-bff6-068f60f6eeae to disappear Mar 29 21:33:47.624: INFO: Pod pod-b80caedd-7b6c-4615-bff6-068f60f6eeae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:33:47.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7270" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:33:47.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 29 21:33:55.740: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:33:55.748: INFO: Pod pod-with-prestop-http-hook still exists Mar 29 21:33:57.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:33:57.752: INFO: Pod pod-with-prestop-http-hook still exists Mar 29 21:33:59.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:33:59.752: INFO: Pod pod-with-prestop-http-hook still exists Mar 29 21:34:01.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:34:01.752: INFO: Pod pod-with-prestop-http-hook still exists Mar 29 21:34:03.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:34:03.763: INFO: Pod pod-with-prestop-http-hook still exists Mar 29 21:34:05.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:34:05.757: INFO: Pod pod-with-prestop-http-hook still exists Mar 29 21:34:07.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:34:07.760: INFO: Pod pod-with-prestop-http-hook still exists Mar 29 21:34:09.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 29 21:34:09.752: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:34:09.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1855" for this suite. 
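------------------------------
Editor's note: the PreStop test above works by creating one pod that serves as an HTTP handler, then a second pod whose container declares a preStop httpGet hook pointed at that handler; deleting the second pod makes the kubelet fire the hook during graceful termination, which the "check prestop hook" step then confirms. A minimal sketch of such a hook follows; the pod name, image, and handler address are illustrative assumptions, not the manifests the suite actually uses.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # hypothetical name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.1.5        # hypothetical handler-pod IP
          port: 8080
          path: /echo?msg=prestop-hook-fired
EOF
# Deleting the pod triggers the preStop GET before the container is stopped:
kubectl delete pod prestop-demo
------------------------------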
• [SLOW TEST:22.134 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2506,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:34:09.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 29 21:34:10.388: INFO: created pod pod-service-account-defaultsa Mar 29 21:34:10.388: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 29 21:34:10.396: INFO: created pod pod-service-account-mountsa Mar 29 21:34:10.396: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 29 21:34:10.402: INFO: created pod pod-service-account-nomountsa Mar 29 21:34:10.402: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 29 21:34:10.482: INFO: created pod pod-service-account-defaultsa-mountspec Mar 29 21:34:10.482: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 29 21:34:10.491: INFO: created pod pod-service-account-mountsa-mountspec Mar 29 21:34:10.491: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 29 21:34:10.550: INFO: created pod pod-service-account-nomountsa-mountspec Mar 29 21:34:10.550: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 29 21:34:10.575: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 29 21:34:10.575: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 29 21:34:10.630: INFO: created pod pod-service-account-mountsa-nomountspec Mar 29 21:34:10.630: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 29 21:34:10.664: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 29 21:34:10.664: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:34:10.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-479" for this suite. 
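------------------------------
Editor's note: the mount/no-mount matrix logged above is driven by two fields: automountServiceAccountToken on the ServiceAccount and the same field on the pod spec, with the pod-level setting taking precedence whenever both are set. A minimal sketch with hypothetical names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # SA-level opt-out
---
apiVersion: v1
kind: Pod
metadata:
  name: automount-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true  # pod-level setting overrides the SA
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
EOF
# Check whether the token volume ended up mounted:
kubectl get pod automount-demo -o jsonpath='{.spec.containers[0].volumeMounts}'
------------------------------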
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":144,"skipped":2512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:34:10.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:34:10.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3" in namespace "projected-5689" to be "success or failure" Mar 29 21:34:10.886: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.908657ms Mar 29 21:34:12.890: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010930986s Mar 29 21:34:14.892: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01363935s Mar 29 21:34:16.925: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046245576s Mar 29 21:34:18.991: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111935684s Mar 29 21:34:21.147: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3": Phase="Running", Reason="", readiness=true. Elapsed: 10.267754609s Mar 29 21:34:23.153: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.274003098s STEP: Saw pod success Mar 29 21:34:23.153: INFO: Pod "downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3" satisfied condition "success or failure" Mar 29 21:34:23.155: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3 container client-container: STEP: delete the pod Mar 29 21:34:23.214: INFO: Waiting for pod downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3 to disappear Mar 29 21:34:23.222: INFO: Pod downwardapi-volume-a29118fe-f618-491b-8cba-fe62dff9a5d3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:34:23.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5689" for this suite. 
• [SLOW TEST:12.474 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2574,"failed":0} [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:34:23.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:34:27.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2817" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2574,"failed":0} ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:34:27.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:34:27.540: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:34:31.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2650" for this suite. 
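------------------------------
Editor's note: the websocket test above reads the pod's log subresource over a websocket connection; the same endpoint can be exercised by hand over plain HTTP through the API server. The namespace pods-2650 is taken from the log, but the suite never prints the pod's name here, so POD_NAME below is a placeholder.
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/pods-2650/pods/POD_NAME/log"
# equivalent through kubectl:
kubectl logs -n pods-2650 POD_NAME
------------------------------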
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2574,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:34:31.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4070 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4070;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4070 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4070;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4070.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4070.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4070.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4070.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4070.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4070.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4070.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4070.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4070.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 53.118.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.118.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.118.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.118.53_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4070 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4070;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4070 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4070;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4070.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4070.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4070.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4070.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4070.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4070.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4070.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4070.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4070.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4070.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 53.118.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.118.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.118.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.118.53_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:34:37.807: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.810: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.813: INFO: Unable to read wheezy_udp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.816: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.819: INFO: Unable to read wheezy_udp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.824: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.827: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.848: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.850: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.853: INFO: Unable to read jessie_udp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.855: INFO: Unable to read jessie_tcp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.858: INFO: Unable to read jessie_udp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.860: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.863: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.866: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:34:37.885: INFO: Lookups using dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4070 wheezy_tcp@dns-test-service.dns-4070 wheezy_udp@dns-test-service.dns-4070.svc wheezy_tcp@dns-test-service.dns-4070.svc wheezy_udp@_http._tcp.dns-test-service.dns-4070.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4070.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4070 jessie_tcp@dns-test-service.dns-4070 jessie_udp@dns-test-service.dns-4070.svc jessie_tcp@dns-test-service.dns-4070.svc jessie_udp@_http._tcp.dns-test-service.dns-4070.svc jessie_tcp@_http._tcp.dns-test-service.dns-4070.svc] Mar 29 21:35:02.889: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:02.892: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:02.895: INFO: Unable to read wheezy_udp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the
server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:02.897: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:02.900: INFO: Unable to read wheezy_udp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:02.902: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:02.905: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:02.907: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.069: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.072: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.075: INFO: Unable to read jessie_udp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.078: INFO: Unable to read jessie_tcp@dns-test-service.dns-4070 from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.081: INFO: Unable to read jessie_udp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.084: INFO: Unable to read jessie_tcp@dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.089: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4070.svc from pod dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2: the server could not find the requested resource (get pods dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2) Mar 29 21:35:03.107: INFO: Lookups using dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4070 wheezy_tcp@dns-test-service.dns-4070 wheezy_udp@dns-test-service.dns-4070.svc wheezy_tcp@dns-test-service.dns-4070.svc wheezy_udp@_http._tcp.dns-test-service.dns-4070.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4070.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4070 jessie_tcp@dns-test-service.dns-4070 jessie_udp@dns-test-service.dns-4070.svc jessie_tcp@dns-test-service.dns-4070.svc jessie_udp@_http._tcp.dns-test-service.dns-4070.svc jessie_tcp@_http._tcp.dns-test-service.dns-4070.svc] Mar 29 21:35:07.966: INFO: DNS probes using dns-4070/dns-test-4d5fdd9d-f9f4-4d81-84b0-dc9707de00b2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:08.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4070" for this suite. • [SLOW TEST:36.865 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":148,"skipped":2576,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:08.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:35:09.171: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:35:11.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114509, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114509, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114509, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114509, 
loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:35:14.210: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:35:14.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5235-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:14.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9420" for this suite. STEP: Destroying namespace "webhook-9420-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.610 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":149,"skipped":2587,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:15.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:19.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5759" for this suite. 
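The read-only-rootfs check that just ran boils down to a pod whose container sets SecurityContext.ReadOnlyRootFilesystem, so that any write to / inside the container fails. A minimal sketch of such a pod using the k8s.io/api Go types (the pod name, image, and command are illustrative assumptions, not the framework's exact values):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // readOnlyRootFSPod builds a pod whose container cannot write to its
    // root filesystem; a `touch /file` inside it should fail.
    func readOnlyRootFSPod() *corev1.Pod {
        readOnly := true
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "busybox",
                    Command: []string{"/bin/sh", "-c", "sleep 600"},
                    SecurityContext: &corev1.SecurityContext{
                        ReadOnlyRootFilesystem: &readOnly,
                    },
                }},
            },
        }
    }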
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2597,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:19.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:35:20.235: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:35:22.245: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:35:24.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114520, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:35:27.303: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:27.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9966" for this suite. STEP: Destroying namespace "webhook-9966-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.345 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":151,"skipped":2606,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:27.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 29 21:35:27.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 29 21:35:27.879: INFO: stderr: "" Mar 29 21:35:27.879: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:27.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1367" for this suite.
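The validating-webhook sequence above registers a webhook, then updates and patches its rules so the CREATE operation is first excluded and then restored, each time checking whether a non-compliant ConfigMap slips through. A rough client-go sketch of the "exclude create" update; the function name and rule rewrite are assumptions for illustration, and the context-taking method signatures are from client-go v0.18+, slightly newer than the v1.17 suite in this log:

    package main

    import (
        "context"

        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // dropCreateOperation rewrites every rule of the named webhook
    // configuration so CREATE is no longer intercepted; creates will then
    // bypass the webhook while updates are still validated.
    func dropCreateOperation(ctx context.Context, cs kubernetes.Interface, name string) error {
        cfg, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for i := range cfg.Webhooks {
            for j := range cfg.Webhooks[i].Rules {
                cfg.Webhooks[i].Rules[j].Operations = []admissionv1.OperationType{admissionv1.Update}
            }
        }
        _, err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Update(ctx, cfg, metav1.UpdateOptions{})
        return err
    }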
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":152,"skipped":2615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:28.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 29 21:35:28.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 29 21:35:28.437: INFO: stderr: "" Mar 29 21:35:28.437: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3729" for this suite. 
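kubectl api-versions, exercised above, is a thin wrapper over the discovery endpoint, and the same "is v1 served?" check can be made directly with client-go's discovery client. A sketch, reusing the kubeconfig path from this log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // ServerGroups returns everything `kubectl api-versions` prints.
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            panic(err)
        }
        found := false
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                if v.GroupVersion == "v1" { // the core (legacy) group
                    found = true
                }
            }
        }
        fmt.Println("v1 served:", found)
    }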
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":153,"skipped":2657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:28.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3f9fa061-e6d2-4832-8302-662236060c3b STEP: Creating a pod to test consume configMaps Mar 29 21:35:28.519: INFO: Waiting up to 5m0s for pod "pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816" in namespace "configmap-5181" to be "success or failure" Mar 29 21:35:28.527: INFO: Pod "pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737471ms Mar 29 21:35:30.531: INFO: Pod "pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012474701s Mar 29 21:35:32.548: INFO: Pod "pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029026143s STEP: Saw pod success Mar 29 21:35:32.548: INFO: Pod "pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816" satisfied condition "success or failure" Mar 29 21:35:32.559: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816 container configmap-volume-test: STEP: delete the pod Mar 29 21:35:32.621: INFO: Waiting for pod pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816 to disappear Mar 29 21:35:32.631: INFO: Pod pod-configmaps-2020cd31-b5e4-44de-8703-7f53a2630816 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:32.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5181" for this suite. 
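The ConfigMap volume test above mounts a ConfigMap into a pod running as a non-root user and reads a key back from the mounted file. A sketch of that pod shape (the UID, mount path, image, command, and key name are illustrative assumptions):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapVolumePodAsNonRoot mounts cmName at /etc/configmap-volume and
    // runs the whole pod under a non-root UID.
    func configMapVolumePodAsNonRoot(cmName string) *corev1.Pod {
        uid := int64(1000) // any non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
    }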
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:32.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:35:32.729: INFO: Creating deployment "test-recreate-deployment" Mar 29 21:35:32.745: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 29 21:35:32.779: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 29 21:35:34.786: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 29 21:35:34.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114532, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114532, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114532, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114532, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:35:36.793: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 29 21:35:36.800: INFO: Updating deployment test-recreate-deployment Mar 29 21:35:36.800: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 29 21:35:37.310: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9668 /apis/apps/v1/namespaces/deployment-9668/deployments/test-recreate-deployment fa5dc7df-91d1-4bd5-b6fc-6c2875df65f9 3793060 2 2020-03-29 21:35:32 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[]
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031ea968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-29 21:35:36 +0000 UTC,LastTransitionTime:2020-03-29 21:35:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-29 21:35:37 +0000 UTC,LastTransitionTime:2020-03-29 21:35:32 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 29 21:35:37.313: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9668 /apis/apps/v1/namespaces/deployment-9668/replicasets/test-recreate-deployment-5f94c574ff 77d01f28-299f-40d1-ab41-ee7449b95b04 3793057 1 2020-03-29 21:35:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment fa5dc7df-91d1-4bd5-b6fc-6c2875df65f9 0xc0031eacf7 0xc0031eacf8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031ead58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:35:37.313: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 29 21:35:37.313: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-9668 /apis/apps/v1/namespaces/deployment-9668/replicasets/test-recreate-deployment-799c574856 edaf3c66-b4cc-42d9-b5f1-72e789fceda5 3793048 2 2020-03-29 21:35:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment fa5dc7df-91d1-4bd5-b6fc-6c2875df65f9 0xc0031eadd7 0xc0031eadd8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031eae48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:35:37.326: INFO: Pod "test-recreate-deployment-5f94c574ff-sp56b" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-sp56b test-recreate-deployment-5f94c574ff- deployment-9668 /api/v1/namespaces/deployment-9668/pods/test-recreate-deployment-5f94c574ff-sp56b 0bbdd8c8-0875-4fb1-8c8d-f7a08a94c4d3 3793061 0 2020-03-29 21:35:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 77d01f28-299f-40d1-ab41-ee7449b95b04 0xc00376a3f7 0xc00376a3f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nn7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nn7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nn7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:35:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:35:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:35:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:35:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-29 21:35:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:37.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9668" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":155,"skipped":2716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:37.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-11e42444-57af-4f34-a202-a944733a5378 STEP: Creating a pod to test consume configMaps Mar 29 21:35:37.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c" in namespace "configmap-6574" to be "success or failure" Mar 29 21:35:37.427: INFO: Pod "pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208125ms Mar 29 21:35:39.430: INFO: Pod "pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007053856s Mar 29 21:35:41.434: INFO: Pod "pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c": Phase="Running", Reason="", readiness=true. Elapsed: 4.010585386s Mar 29 21:35:43.440: INFO: Pod "pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01722009s STEP: Saw pod success Mar 29 21:35:43.440: INFO: Pod "pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c" satisfied condition "success or failure" Mar 29 21:35:43.443: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c container configmap-volume-test: STEP: delete the pod Mar 29 21:35:43.494: INFO: Waiting for pod pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c to disappear Mar 29 21:35:43.499: INFO: Pod pod-configmaps-3eb0c65e-2dac-4040-aa4a-a08fba6f6f9c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:43.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6574" for this suite. • [SLOW TEST:6.173 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2752,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:43.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:35:43.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-848" for this suite. 
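The discovery walk above proceeds from /apis to the group, then the group/version, then the resource list. Its final step, confirming that customresourcedefinitions is served under apiextensions.k8s.io/v1, looks roughly like this with the discovery client (kubeconfig path taken from the log):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // Equivalent of fetching the /apis/apiextensions.k8s.io/v1 document.
        list, err := cs.Discovery().ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
        if err != nil {
            panic(err)
        }
        for _, r := range list.APIResources {
            if r.Name == "customresourcedefinitions" {
                fmt.Println("found in discovery:", r.Name, "kind:", r.Kind)
            }
        }
    }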
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":157,"skipped":2753,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:35:43.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-174 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-174 STEP: Creating statefulset with conflicting port in namespace statefulset-174 STEP: Waiting until pod test-pod will start running in namespace statefulset-174 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-174 Mar 29 21:35:47.759: INFO: Observed stateful pod in namespace: statefulset-174, name: ss-0, uid: 433e79a0-b049-46aa-bea8-af45f302b453, status phase: Pending. Waiting for statefulset controller to delete. Mar 29 21:35:48.326: INFO: Observed stateful pod in namespace: statefulset-174, name: ss-0, uid: 433e79a0-b049-46aa-bea8-af45f302b453, status phase: Failed. Waiting for statefulset controller to delete. Mar 29 21:35:48.334: INFO: Observed stateful pod in namespace: statefulset-174, name: ss-0, uid: 433e79a0-b049-46aa-bea8-af45f302b453, status phase: Failed. Waiting for statefulset controller to delete. Mar 29 21:35:48.355: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-174 STEP: Removing pod with conflicting port in namespace statefulset-174 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-174 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 29 21:35:52.431: INFO: Deleting all statefulset in ns statefulset-174 Mar 29 21:35:52.434: INFO: Scaling statefulset ss to 0 Mar 29 21:36:02.449: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:36:02.452: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:36:02.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-174" for this suite. 
• [SLOW TEST:18.880 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":158,"skipped":2753,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:36:02.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 29 21:36:02.544: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix189357922/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:36:02.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-992" for this suite. 
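The proxy test above boils down to two commands: serve the API over a unix socket, then fetch /api/ through it. A minimal sketch; the socket path is illustrative, and curl must be built with unix-socket support:

# start the proxy on a unix socket in the background
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1                                   # give it a moment to create the socket
# fetch the /api/ document through the socket, as the test does
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1                                   # stop the background proxy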
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":159,"skipped":2755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:36:02.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-41faf95c-5892-4600-a0c8-67bfb560eaec in namespace container-probe-9090 Mar 29 21:36:06.719: INFO: Started pod busybox-41faf95c-5892-4600-a0c8-67bfb560eaec in namespace container-probe-9090 STEP: checking the pod's current state and verifying that restartCount is present Mar 29 21:36:06.722: INFO: Initial restart count of pod busybox-41faf95c-5892-4600-a0c8-67bfb560eaec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:40:07.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9090" for this suite. 
• [SLOW TEST:244.698 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:40:07.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-480 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-480 to expose endpoints map[] Mar 29 21:40:07.437: INFO: Get endpoints failed (29.491612ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 29 21:40:08.441: INFO: successfully validated that service multi-endpoint-test in namespace services-480 exposes endpoints map[] (1.033427638s elapsed) STEP: Creating pod pod1 in namespace services-480 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-480 to expose endpoints map[pod1:[100]] Mar 29 21:40:12.769: INFO: successfully validated that service multi-endpoint-test in namespace services-480 exposes endpoints map[pod1:[100]] (4.315362349s elapsed) STEP: Creating pod pod2 in namespace services-480 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-480 to expose endpoints map[pod1:[100] pod2:[101]] Mar 29 21:40:15.856: INFO: successfully validated that service multi-endpoint-test in namespace services-480 exposes endpoints map[pod1:[100] pod2:[101]] (3.084000864s elapsed) STEP: Deleting pod pod1 in namespace services-480 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-480 to expose endpoints map[pod2:[101]] Mar 29 21:40:16.887: INFO: successfully validated that service multi-endpoint-test in namespace services-480 exposes endpoints map[pod2:[101]] (1.026806415s elapsed) STEP: Deleting pod pod2 in namespace services-480 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-480 to expose endpoints map[] Mar 29 21:40:17.905: INFO: successfully validated that service multi-endpoint-test in namespace services-480 exposes endpoints map[] (1.01261765s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:40:17.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-480" for this suite. 
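The endpoints maps logged above (pod1:[100], pod2:[101]) are what a two-port Service produces as matching pods come and go: each named service port forwards to a different targetPort. A minimal sketch of such a Service; the names and port numbers are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-demo
spec:
  selector:
    app: multiport-demo
  ports:
  - name: portname1      # multi-port Services must name every port
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
# endpoints appear and disappear as pods matching the selector are created/deleted
kubectl get endpoints multi-endpoint-demo -o wide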
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.658 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":161,"skipped":2808,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:40:17.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 29 21:40:18.018: INFO: PodSpec: initContainers in spec.initContainers Mar 29 21:41:12.242: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5f846e78-062c-48ad-aa3f-22c299eca553", GenerateName:"", Namespace:"init-container-2633", SelfLink:"/api/v1/namespaces/init-container-2633/pods/pod-init-5f846e78-062c-48ad-aa3f-22c299eca553", UID:"6abf0744-4f3c-4ddb-9816-4e2248847242", ResourceVersion:"3794371", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721114818, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"18704333"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bpx4q", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00536f480), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bpx4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bpx4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bpx4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc0035a2e58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b122a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0035a2ee0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0035a2f00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0035a2f08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0035a2f0c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114818, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114818, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114818, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114818, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.50", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.50"}}, StartTime:(*v1.Time)(0xc001e088c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008e4fc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008e5030)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://55e89c6760623fd398688122780205dea49dff492416a16c25432ded0a4602f5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e08900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e088e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0035a2f8f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:12.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2633" for this suite. • [SLOW TEST:54.272 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":162,"skipped":2829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:12.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 29 21:41:12.398: INFO: Waiting up to 5m0s for pod "downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e" in namespace "downward-api-2853" to be "success or failure" Mar 29 21:41:12.418: INFO: Pod "downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.534154ms Mar 29 21:41:14.422: INFO: Pod "downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023629672s Mar 29 21:41:16.426: INFO: Pod "downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027302579s STEP: Saw pod success Mar 29 21:41:16.426: INFO: Pod "downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e" satisfied condition "success or failure" Mar 29 21:41:16.429: INFO: Trying to get logs from node jerma-worker pod downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e container dapi-container: STEP: delete the pod Mar 29 21:41:16.461: INFO: Waiting for pod downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e to disappear Mar 29 21:41:16.465: INFO: Pod downward-api-d52d8341-64fd-4cc7-b656-d834d681d17e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:16.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2853" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2859,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:16.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8189.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8189.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8189.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8189.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8189.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 150.164.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.164.150_udp@PTR;check="$$(dig +tcp +noall +answer +search 150.164.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.164.150_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8189.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8189.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8189.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8189.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8189.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8189.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8189.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 150.164.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.164.150_udp@PTR;check="$$(dig +tcp +noall +answer +search 150.164.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.164.150_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:41:22.708: INFO: Unable to read wheezy_udp@dns-test-service.dns-8189.svc.cluster.local from pod dns-8189/dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6: the server could not find the requested resource (get pods dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6) Mar 29 21:41:22.711: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8189.svc.cluster.local from pod dns-8189/dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6: the server could not find the requested resource (get pods dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6) Mar 29 21:41:22.744: INFO: Unable to read jessie_tcp@dns-test-service.dns-8189.svc.cluster.local from pod dns-8189/dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6: the server could not find the requested resource (get pods dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6) Mar 29 21:41:22.752: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8189.svc.cluster.local from pod dns-8189/dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6: the server could not find the requested resource (get pods dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6) Mar 29 21:41:22.764: INFO: Lookups using dns-8189/dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6 failed for: [wheezy_udp@dns-test-service.dns-8189.svc.cluster.local wheezy_tcp@dns-test-service.dns-8189.svc.cluster.local jessie_tcp@dns-test-service.dns-8189.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8189.svc.cluster.local] Mar 29 21:41:27.825: INFO: DNS probes using dns-8189/dns-test-50b3a7ba-8a36-4eff-9888-bdcc59ca7bd6 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:28.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8189" for this suite. 
• [SLOW TEST:11.844 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":164,"skipped":2868,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:28.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:32.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4907" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":165,"skipped":2871,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:32.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:41:32.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350" in namespace "downward-api-282" to be "success or failure" Mar 29 21:41:32.603: INFO: Pod "downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350": Phase="Pending", Reason="", readiness=false. Elapsed: 12.308019ms Mar 29 21:41:34.648: INFO: Pod "downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057452674s Mar 29 21:41:36.652: INFO: Pod "downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.060881783s STEP: Saw pod success Mar 29 21:41:36.652: INFO: Pod "downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350" satisfied condition "success or failure" Mar 29 21:41:36.656: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350 container client-container: STEP: delete the pod Mar 29 21:41:36.708: INFO: Waiting for pod downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350 to disappear Mar 29 21:41:36.714: INFO: Pod downwardapi-volume-ed9bf757-3901-4d29-ae2e-844add0d4350 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:36.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-282" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2891,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:36.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:41:37.122: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:41:39.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114897, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114897, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114897, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114897, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:41:42.215: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:42.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1446" for this suite. STEP: Destroying namespace "webhook-1446-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.667 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":167,"skipped":2911,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:42.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:41:42.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c" in namespace "downward-api-1377" to be "success or failure" Mar 29 21:41:42.490: INFO: Pod "downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.735052ms Mar 29 21:41:44.495: INFO: Pod "downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008443297s Mar 29 21:41:46.499: INFO: Pod "downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012744s STEP: Saw pod success Mar 29 21:41:46.499: INFO: Pod "downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c" satisfied condition "success or failure" Mar 29 21:41:46.502: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c container client-container: STEP: delete the pod Mar 29 21:41:46.527: INFO: Waiting for pod downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c to disappear Mar 29 21:41:46.532: INFO: Pod downwardapi-volume-3cb55169-3314-44ad-917a-2262f13d2a4c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:46.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1377" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2925,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:46.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 29 21:41:46.624: INFO: Waiting up to 5m0s for pod "pod-ce01cceb-b033-4ce7-809d-7bf3debd2640" in namespace "emptydir-9891" to be "success or failure" Mar 29 21:41:46.640: INFO: Pod "pod-ce01cceb-b033-4ce7-809d-7bf3debd2640": Phase="Pending", Reason="", readiness=false. Elapsed: 15.939771ms Mar 29 21:41:48.644: INFO: Pod "pod-ce01cceb-b033-4ce7-809d-7bf3debd2640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020058228s Mar 29 21:41:50.648: INFO: Pod "pod-ce01cceb-b033-4ce7-809d-7bf3debd2640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024173569s STEP: Saw pod success Mar 29 21:41:50.648: INFO: Pod "pod-ce01cceb-b033-4ce7-809d-7bf3debd2640" satisfied condition "success or failure" Mar 29 21:41:50.651: INFO: Trying to get logs from node jerma-worker2 pod pod-ce01cceb-b033-4ce7-809d-7bf3debd2640 container test-container: STEP: delete the pod Mar 29 21:41:50.696: INFO: Waiting for pod pod-ce01cceb-b033-4ce7-809d-7bf3debd2640 to disappear Mar 29 21:41:50.706: INFO: Pod pod-ce01cceb-b033-4ce7-809d-7bf3debd2640 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:50.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9891" for this suite. 
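The emptydir variant above runs the pod as a non-root user against the default (node-disk) medium and verifies the 0777 permissions on the volume. A minimal sketch in the same spirit; the pod name, UID, and commands are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001                  # non-root, as in the test variant
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && echo hello > /test-volume/f && cat /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium (node disk)
EOF
kubectl logs emptydir-demo           # the directory listing should show drwxrwxrwx (0777)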
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2947,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:50.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-00c7fcdd-cc32-4fda-8944-516872b917e7 STEP: Creating a pod to test consume configMaps Mar 29 21:41:50.797: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2" in namespace "projected-7494" to be "success or failure" Mar 29 21:41:50.802: INFO: Pod "pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264253ms Mar 29 21:41:52.806: INFO: Pod "pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008711531s Mar 29 21:41:54.810: INFO: Pod "pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012679751s STEP: Saw pod success Mar 29 21:41:54.810: INFO: Pod "pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2" satisfied condition "success or failure" Mar 29 21:41:54.813: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2 container projected-configmap-volume-test: STEP: delete the pod Mar 29 21:41:54.846: INFO: Waiting for pod pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2 to disappear Mar 29 21:41:54.876: INFO: Pod pod-projected-configmaps-9f585a1b-4565-4b9f-9890-1a09f86b8cb2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:41:54.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7494" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2954,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:41:54.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:41:55.589: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:41:57.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114915, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114915, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114915, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114915, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:42:00.649: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:42:01.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6428" for this suite. STEP: Destroying namespace "webhook-6428-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.407 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":171,"skipped":2958,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:42:01.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:42:02.288: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:42:04.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114922, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114922, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114922, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721114922, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:42:07.357: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:42:07.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom 
resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:42:08.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6950" for this suite. STEP: Destroying namespace "webhook-6950-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.350 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":172,"skipped":2976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:42:08.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:42:19.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4720" for this suite. • [SLOW TEST:11.192 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":173,"skipped":3021,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:42:19.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3762 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3762 I0329 21:42:20.033661 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3762, replica count: 2 I0329 21:42:23.084044 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:42:26.084366 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 29 21:42:26.084: INFO: Creating new exec pod Mar 29 21:42:31.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3762 execpodcddn7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 29 21:42:33.688: INFO: stderr: "I0329 21:42:33.591908 1586 log.go:172] (0xc000f8a580) (0xc0005686e0) Create stream\nI0329 21:42:33.591940 1586 log.go:172] (0xc000f8a580) (0xc0005686e0) Stream added, broadcasting: 1\nI0329 21:42:33.600499 1586 log.go:172] (0xc000f8a580) Reply frame received for 1\nI0329 21:42:33.600543 1586 log.go:172] (0xc000f8a580) (0xc000192000) Create stream\nI0329 21:42:33.600552 1586 log.go:172] (0xc000f8a580) (0xc000192000) Stream added, broadcasting: 3\nI0329 21:42:33.601927 1586 log.go:172] (0xc000f8a580) Reply frame received for 3\nI0329 21:42:33.601967 1586 log.go:172] (0xc000f8a580) (0xc0001c0000) Create stream\nI0329 21:42:33.601977 1586 log.go:172] (0xc000f8a580) (0xc0001c0000) Stream added, broadcasting: 5\nI0329 21:42:33.602856 1586 log.go:172] (0xc000f8a580) Reply frame received for 5\nI0329 21:42:33.678485 1586 log.go:172] (0xc000f8a580) Data frame received for 5\nI0329 21:42:33.678515 1586 log.go:172] (0xc0001c0000) (5) Data frame handling\nI0329 21:42:33.678547 1586 log.go:172] (0xc0001c0000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0329 21:42:33.679433 1586 log.go:172] (0xc000f8a580) Data frame received for 5\nI0329 21:42:33.679465 1586 log.go:172] (0xc0001c0000) (5) Data frame handling\nI0329 21:42:33.679492 1586 log.go:172] (0xc0001c0000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0329 21:42:33.679933 1586 log.go:172] (0xc000f8a580) Data frame received for 3\nI0329 21:42:33.679960 1586 log.go:172] (0xc000192000) (3) Data frame 
handling\nI0329 21:42:33.680012 1586 log.go:172] (0xc000f8a580) Data frame received for 5\nI0329 21:42:33.680034 1586 log.go:172] (0xc0001c0000) (5) Data frame handling\nI0329 21:42:33.682198 1586 log.go:172] (0xc000f8a580) Data frame received for 1\nI0329 21:42:33.682241 1586 log.go:172] (0xc0005686e0) (1) Data frame handling\nI0329 21:42:33.682280 1586 log.go:172] (0xc0005686e0) (1) Data frame sent\nI0329 21:42:33.682309 1586 log.go:172] (0xc000f8a580) (0xc0005686e0) Stream removed, broadcasting: 1\nI0329 21:42:33.682339 1586 log.go:172] (0xc000f8a580) Go away received\nI0329 21:42:33.682811 1586 log.go:172] (0xc000f8a580) (0xc0005686e0) Stream removed, broadcasting: 1\nI0329 21:42:33.682832 1586 log.go:172] (0xc000f8a580) (0xc000192000) Stream removed, broadcasting: 3\nI0329 21:42:33.682843 1586 log.go:172] (0xc000f8a580) (0xc0001c0000) Stream removed, broadcasting: 5\n" Mar 29 21:42:33.688: INFO: stdout: "" Mar 29 21:42:33.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3762 execpodcddn7 -- /bin/sh -x -c nc -zv -t -w 2 10.101.117.158 80' Mar 29 21:42:33.896: INFO: stderr: "I0329 21:42:33.824714 1621 log.go:172] (0xc0001149a0) (0xc000a46000) Create stream\nI0329 21:42:33.824771 1621 log.go:172] (0xc0001149a0) (0xc000a46000) Stream added, broadcasting: 1\nI0329 21:42:33.827270 1621 log.go:172] (0xc0001149a0) Reply frame received for 1\nI0329 21:42:33.827298 1621 log.go:172] (0xc0001149a0) (0xc0006a3ae0) Create stream\nI0329 21:42:33.827305 1621 log.go:172] (0xc0001149a0) (0xc0006a3ae0) Stream added, broadcasting: 3\nI0329 21:42:33.828083 1621 log.go:172] (0xc0001149a0) Reply frame received for 3\nI0329 21:42:33.828112 1621 log.go:172] (0xc0001149a0) (0xc000a460a0) Create stream\nI0329 21:42:33.828121 1621 log.go:172] (0xc0001149a0) (0xc000a460a0) Stream added, broadcasting: 5\nI0329 21:42:33.829043 1621 log.go:172] (0xc0001149a0) Reply frame received for 5\nI0329 21:42:33.889585 1621 log.go:172] (0xc0001149a0) Data frame received for 5\nI0329 21:42:33.889635 1621 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0329 21:42:33.889654 1621 log.go:172] (0xc000a460a0) (5) Data frame sent\nI0329 21:42:33.889673 1621 log.go:172] (0xc0001149a0) Data frame received for 5\nI0329 21:42:33.889684 1621 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0329 21:42:33.889699 1621 log.go:172] (0xc0001149a0) Data frame received for 3\nI0329 21:42:33.889721 1621 log.go:172] (0xc0006a3ae0) (3) Data frame handling\n+ nc -zv -t -w 2 10.101.117.158 80\nConnection to 10.101.117.158 80 port [tcp/http] succeeded!\nI0329 21:42:33.891327 1621 log.go:172] (0xc0001149a0) Data frame received for 1\nI0329 21:42:33.891357 1621 log.go:172] (0xc000a46000) (1) Data frame handling\nI0329 21:42:33.891378 1621 log.go:172] (0xc000a46000) (1) Data frame sent\nI0329 21:42:33.891397 1621 log.go:172] (0xc0001149a0) (0xc000a46000) Stream removed, broadcasting: 1\nI0329 21:42:33.891429 1621 log.go:172] (0xc0001149a0) Go away received\nI0329 21:42:33.891847 1621 log.go:172] (0xc0001149a0) (0xc000a46000) Stream removed, broadcasting: 1\nI0329 21:42:33.891874 1621 log.go:172] (0xc0001149a0) (0xc0006a3ae0) Stream removed, broadcasting: 3\nI0329 21:42:33.891896 1621 log.go:172] (0xc0001149a0) (0xc000a460a0) Stream removed, broadcasting: 5\n" Mar 29 21:42:33.896: INFO: stdout: "" Mar 29 21:42:33.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3762 execpodcddn7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30323' Mar 29 
21:42:34.132: INFO: stderr: "I0329 21:42:34.055074 1642 log.go:172] (0xc0000f54a0) (0xc000a84000) Create stream\nI0329 21:42:34.055128 1642 log.go:172] (0xc0000f54a0) (0xc000a84000) Stream added, broadcasting: 1\nI0329 21:42:34.057785 1642 log.go:172] (0xc0000f54a0) Reply frame received for 1\nI0329 21:42:34.057839 1642 log.go:172] (0xc0000f54a0) (0xc000a840a0) Create stream\nI0329 21:42:34.057856 1642 log.go:172] (0xc0000f54a0) (0xc000a840a0) Stream added, broadcasting: 3\nI0329 21:42:34.058819 1642 log.go:172] (0xc0000f54a0) Reply frame received for 3\nI0329 21:42:34.058856 1642 log.go:172] (0xc0000f54a0) (0xc0008e0000) Create stream\nI0329 21:42:34.058865 1642 log.go:172] (0xc0000f54a0) (0xc0008e0000) Stream added, broadcasting: 5\nI0329 21:42:34.059633 1642 log.go:172] (0xc0000f54a0) Reply frame received for 5\nI0329 21:42:34.125749 1642 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0329 21:42:34.125780 1642 log.go:172] (0xc0008e0000) (5) Data frame handling\nI0329 21:42:34.125802 1642 log.go:172] (0xc0008e0000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30323\nConnection to 172.17.0.10 30323 port [tcp/30323] succeeded!\nI0329 21:42:34.126357 1642 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0329 21:42:34.126389 1642 log.go:172] (0xc0008e0000) (5) Data frame handling\nI0329 21:42:34.126411 1642 log.go:172] (0xc0000f54a0) Data frame received for 3\nI0329 21:42:34.126423 1642 log.go:172] (0xc000a840a0) (3) Data frame handling\nI0329 21:42:34.127893 1642 log.go:172] (0xc0000f54a0) Data frame received for 1\nI0329 21:42:34.127916 1642 log.go:172] (0xc000a84000) (1) Data frame handling\nI0329 21:42:34.127935 1642 log.go:172] (0xc000a84000) (1) Data frame sent\nI0329 21:42:34.127949 1642 log.go:172] (0xc0000f54a0) (0xc000a84000) Stream removed, broadcasting: 1\nI0329 21:42:34.128029 1642 log.go:172] (0xc0000f54a0) Go away received\nI0329 21:42:34.128369 1642 log.go:172] (0xc0000f54a0) (0xc000a84000) Stream removed, broadcasting: 1\nI0329 21:42:34.128392 1642 log.go:172] (0xc0000f54a0) (0xc000a840a0) Stream removed, broadcasting: 3\nI0329 21:42:34.128404 1642 log.go:172] (0xc0000f54a0) (0xc0008e0000) Stream removed, broadcasting: 5\n" Mar 29 21:42:34.132: INFO: stdout: "" Mar 29 21:42:34.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3762 execpodcddn7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30323' Mar 29 21:42:34.332: INFO: stderr: "I0329 21:42:34.255855 1665 log.go:172] (0xc0009ae9a0) (0xc00064fea0) Create stream\nI0329 21:42:34.255927 1665 log.go:172] (0xc0009ae9a0) (0xc00064fea0) Stream added, broadcasting: 1\nI0329 21:42:34.258904 1665 log.go:172] (0xc0009ae9a0) Reply frame received for 1\nI0329 21:42:34.258966 1665 log.go:172] (0xc0009ae9a0) (0xc0005b2780) Create stream\nI0329 21:42:34.258994 1665 log.go:172] (0xc0009ae9a0) (0xc0005b2780) Stream added, broadcasting: 3\nI0329 21:42:34.260067 1665 log.go:172] (0xc0009ae9a0) Reply frame received for 3\nI0329 21:42:34.260100 1665 log.go:172] (0xc0009ae9a0) (0xc00064ff40) Create stream\nI0329 21:42:34.260115 1665 log.go:172] (0xc0009ae9a0) (0xc00064ff40) Stream added, broadcasting: 5\nI0329 21:42:34.261249 1665 log.go:172] (0xc0009ae9a0) Reply frame received for 5\nI0329 21:42:34.324357 1665 log.go:172] (0xc0009ae9a0) Data frame received for 5\nI0329 21:42:34.324381 1665 log.go:172] (0xc00064ff40) (5) Data frame handling\nI0329 21:42:34.324389 1665 log.go:172] (0xc00064ff40) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30323\nI0329 21:42:34.325723 1665 
log.go:172] (0xc0009ae9a0) Data frame received for 5\nI0329 21:42:34.325754 1665 log.go:172] (0xc00064ff40) (5) Data frame handling\nI0329 21:42:34.325773 1665 log.go:172] (0xc00064ff40) (5) Data frame sent\nConnection to 172.17.0.8 30323 port [tcp/30323] succeeded!\nI0329 21:42:34.326038 1665 log.go:172] (0xc0009ae9a0) Data frame received for 5\nI0329 21:42:34.326068 1665 log.go:172] (0xc00064ff40) (5) Data frame handling\nI0329 21:42:34.326107 1665 log.go:172] (0xc0009ae9a0) Data frame received for 3\nI0329 21:42:34.326119 1665 log.go:172] (0xc0005b2780) (3) Data frame handling\nI0329 21:42:34.327738 1665 log.go:172] (0xc0009ae9a0) Data frame received for 1\nI0329 21:42:34.327769 1665 log.go:172] (0xc00064fea0) (1) Data frame handling\nI0329 21:42:34.327783 1665 log.go:172] (0xc00064fea0) (1) Data frame sent\nI0329 21:42:34.327797 1665 log.go:172] (0xc0009ae9a0) (0xc00064fea0) Stream removed, broadcasting: 1\nI0329 21:42:34.327818 1665 log.go:172] (0xc0009ae9a0) Go away received\nI0329 21:42:34.328367 1665 log.go:172] (0xc0009ae9a0) (0xc00064fea0) Stream removed, broadcasting: 1\nI0329 21:42:34.328394 1665 log.go:172] (0xc0009ae9a0) (0xc0005b2780) Stream removed, broadcasting: 3\nI0329 21:42:34.328405 1665 log.go:172] (0xc0009ae9a0) (0xc00064ff40) Stream removed, broadcasting: 5\n" Mar 29 21:42:34.332: INFO: stdout: "" Mar 29 21:42:34.332: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:42:34.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3762" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.591 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":174,"skipped":3038,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:42:34.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:42:34.495: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-69a8184f-6991-43b6-8ba1-e76991173ba3" in namespace 
"security-context-test-325" to be "success or failure" Mar 29 21:42:34.499: INFO: Pod "busybox-readonly-false-69a8184f-6991-43b6-8ba1-e76991173ba3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.335602ms Mar 29 21:42:36.502: INFO: Pod "busybox-readonly-false-69a8184f-6991-43b6-8ba1-e76991173ba3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006391399s Mar 29 21:42:38.505: INFO: Pod "busybox-readonly-false-69a8184f-6991-43b6-8ba1-e76991173ba3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009866209s Mar 29 21:42:38.505: INFO: Pod "busybox-readonly-false-69a8184f-6991-43b6-8ba1-e76991173ba3" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:42:38.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-325" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":3047,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:42:38.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 29 21:42:38.609: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 29 21:42:48.952: INFO: >>> kubeConfig: /root/.kube/config Mar 29 21:42:50.845: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:43:01.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4408" for this suite. 
• [SLOW TEST:22.748 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":176,"skipped":3070,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:43:01.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-18fdd8f8-0786-48c6-b79a-78295a2d369e [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:43:01.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2544" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":177,"skipped":3077,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:43:01.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 29 21:43:01.449: INFO: Waiting up to 5m0s for pod "client-containers-0311f349-f868-4b82-8130-b7061ca867cb" in namespace "containers-3290" to be "success or failure" Mar 29 21:43:01.464: INFO: Pod "client-containers-0311f349-f868-4b82-8130-b7061ca867cb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.430766ms Mar 29 21:43:03.467: INFO: Pod "client-containers-0311f349-f868-4b82-8130-b7061ca867cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017878973s Mar 29 21:43:05.471: INFO: Pod "client-containers-0311f349-f868-4b82-8130-b7061ca867cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021500548s STEP: Saw pod success Mar 29 21:43:05.471: INFO: Pod "client-containers-0311f349-f868-4b82-8130-b7061ca867cb" satisfied condition "success or failure" Mar 29 21:43:05.473: INFO: Trying to get logs from node jerma-worker2 pod client-containers-0311f349-f868-4b82-8130-b7061ca867cb container test-container: STEP: delete the pod Mar 29 21:43:05.489: INFO: Waiting for pod client-containers-0311f349-f868-4b82-8130-b7061ca867cb to disappear Mar 29 21:43:05.493: INFO: Pod client-containers-0311f349-f868-4b82-8130-b7061ca867cb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:43:05.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3290" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3088,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:43:05.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 29 21:43:05.621: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5072 /api/v1/namespaces/watch-5072/configmaps/e2e-watch-test-watch-closed 741b4af1-c2e1-4ca5-a76f-52343b35a91d 3795316 0 2020-03-29 21:43:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 29 21:43:05.621: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5072 /api/v1/namespaces/watch-5072/configmaps/e2e-watch-test-watch-closed 741b4af1-c2e1-4ca5-a76f-52343b35a91d 3795317 0 2020-03-29 21:43:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 29 21:43:05.658: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5072 /api/v1/namespaces/watch-5072/configmaps/e2e-watch-test-watch-closed 741b4af1-c2e1-4ca5-a76f-52343b35a91d 3795318 0 2020-03-29 21:43:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 29 21:43:05.658: INFO: Got : 
DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5072 /api/v1/namespaces/watch-5072/configmaps/e2e-watch-test-watch-closed 741b4af1-c2e1-4ca5-a76f-52343b35a91d 3795319 0 2020-03-29 21:43:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:43:05.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5072" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":179,"skipped":3093,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:43:05.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-faf3dfc3-a403-475e-95cb-7e748ef0b659 STEP: Creating secret with name s-test-opt-upd-5f2a4ee8-8670-4d5a-8f91-8f7550ee7620 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-faf3dfc3-a403-475e-95cb-7e748ef0b659 STEP: Updating secret s-test-opt-upd-5f2a4ee8-8670-4d5a-8f91-8f7550ee7620 STEP: Creating secret with name s-test-opt-create-5feea59d-b4ac-47eb-bba6-907c3eb74516 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:44:34.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7258" for this suite. 
• [SLOW TEST:88.624 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3103,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:44:34.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:44:34.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a" in namespace "downward-api-6369" to be "success or failure" Mar 29 21:44:34.374: INFO: Pod "downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.296848ms Mar 29 21:44:36.411: INFO: Pod "downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056000372s Mar 29 21:44:38.416: INFO: Pod "downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060534736s STEP: Saw pod success Mar 29 21:44:38.416: INFO: Pod "downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a" satisfied condition "success or failure" Mar 29 21:44:38.419: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a container client-container: STEP: delete the pod Mar 29 21:44:38.473: INFO: Waiting for pod downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a to disappear Mar 29 21:44:38.495: INFO: Pod downwardapi-volume-3faac1cf-525b-4879-9c8e-034bf4a3a13a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:44:38.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6369" for this suite. 
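The "should provide podname only" case above exercises the downward API volume plugin. A minimal sketch of the volume it relies on, assuming recent k8s.io/api types (the suite's actual pod additionally runs a client container that cats the file back):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podnameVolume projects the pod's own metadata.name into a file named
    // "podname" that any container mounting the volume can read.
    func podnameVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "podname",
                        FieldRef: &corev1.ObjectFieldSelector{
                            APIVersion: "v1",
                            FieldPath:  "metadata.name",
                        },
                    }},
                },
            },
        }
    }

    func main() { fmt.Println(podnameVolume().Name) }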
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3111,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:44:38.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 29 21:44:38.551: INFO: Waiting up to 5m0s for pod "pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de" in namespace "emptydir-5954" to be "success or failure" Mar 29 21:44:38.555: INFO: Pod "pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.732416ms Mar 29 21:44:40.559: INFO: Pod "pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007611321s Mar 29 21:44:42.563: INFO: Pod "pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de": Phase="Running", Reason="", readiness=true. Elapsed: 4.01169874s Mar 29 21:44:44.567: INFO: Pod "pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015935755s STEP: Saw pod success Mar 29 21:44:44.567: INFO: Pod "pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de" satisfied condition "success or failure" Mar 29 21:44:44.570: INFO: Trying to get logs from node jerma-worker2 pod pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de container test-container: STEP: delete the pod Mar 29 21:44:44.587: INFO: Waiting for pod pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de to disappear Mar 29 21:44:44.591: INFO: Pod pod-f54782c0-b153-4e12-a2f9-c2779aa2c2de no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:44:44.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5954" for this suite. 
• [SLOW TEST:6.096 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3128,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:44:44.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:44:44.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a" in namespace "projected-1975" to be "success or failure" Mar 29 21:44:44.676: INFO: Pod "downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.436494ms Mar 29 21:44:46.679: INFO: Pod "downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019986566s Mar 29 21:44:48.683: INFO: Pod "downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024230337s STEP: Saw pod success Mar 29 21:44:48.683: INFO: Pod "downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a" satisfied condition "success or failure" Mar 29 21:44:48.687: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a container client-container: STEP: delete the pod Mar 29 21:44:48.702: INFO: Waiting for pod downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a to disappear Mar 29 21:44:48.713: INFO: Pod downwardapi-volume-9d387f6b-db09-47f0-8df3-23df7171b72a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:44:48.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1975" for this suite. 
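The "default cpu limit" case above relies on downward API fallback behavior: projecting limits.cpu for a container that sets no CPU limit yields the node's allocatable CPU instead. A minimal sketch of the projected item, assuming recent k8s.io/api types; the container name is a placeholder that must match a container in the same pod:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // cpuLimitFile projects limits.cpu into a file. With no CPU limit set
    // on the named container, the kubelet substitutes node allocatable CPU,
    // which is what the test reads back out of the volume.
    func cpuLimitFile() corev1.DownwardAPIVolumeFile {
        return corev1.DownwardAPIVolumeFile{
            Path: "cpu_limit",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container", // must name a container in this pod
                Resource:      "limits.cpu",
            },
        }
    }

    func main() { fmt.Println(cpuLimitFile().Path) }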
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3132,"failed":0} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:44:48.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 29 21:44:53.298: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1136 pod-service-account-bd91ccb5-8b3b-4303-9f81-fbaab0cde56e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 29 21:44:53.540: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1136 pod-service-account-bd91ccb5-8b3b-4303-9f81-fbaab0cde56e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 29 21:44:53.762: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1136 pod-service-account-bd91ccb5-8b3b-4303-9f81-fbaab0cde56e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:44:53.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1136" for this suite. 
• [SLOW TEST:5.219 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":184,"skipped":3141,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:44:53.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-26d4aa25-62aa-4ac0-9422-ee431b07f15e STEP: Creating a pod to test consume configMaps Mar 29 21:44:54.018: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885" in namespace "projected-1374" to be "success or failure" Mar 29 21:44:54.058: INFO: Pod "pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885": Phase="Pending", Reason="", readiness=false. Elapsed: 40.26488ms Mar 29 21:44:56.076: INFO: Pod "pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058297888s Mar 29 21:44:58.080: INFO: Pod "pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061855144s STEP: Saw pod success Mar 29 21:44:58.080: INFO: Pod "pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885" satisfied condition "success or failure" Mar 29 21:44:58.082: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885 container projected-configmap-volume-test: STEP: delete the pod Mar 29 21:44:58.102: INFO: Waiting for pod pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885 to disappear Mar 29 21:44:58.106: INFO: Pod pod-projected-configmaps-6e09ee95-558c-444c-be0a-21d104db1885 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:44:58.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1374" for this suite. 
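The "as non-root" variant above combines a projected ConfigMap volume with a pod-level security context. A minimal sketch, assuming recent k8s.io/api types; the ConfigMap, mount path, and UID are illustrative, not the suite's generated values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nonRootConfigMapPod reads a ConfigMap through a projected volume
    // while the whole pod runs as UID 1000, so the projected files must be
    // readable by a non-root user.
    func nonRootConfigMapPod() *corev1.Pod {
        uid := int64(1000)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "projected-configmap-volume-test",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/*"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
                }},
            },
        }
    }

    func main() { fmt.Println(nonRootConfigMapPod().Name) }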
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3160,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:44:58.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:45:02.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1656" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3167,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:45:02.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:02.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3680" for this suite. 
• [SLOW TEST:60.085 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3188,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:02.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-4f318aa0-fea4-4161-992f-1d646a0ab2b9 STEP: Creating a pod to test consume secrets Mar 29 21:46:02.446: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806" in namespace "projected-2061" to be "success or failure" Mar 29 21:46:02.457: INFO: Pod "pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806": Phase="Pending", Reason="", readiness=false. Elapsed: 11.084844ms Mar 29 21:46:04.466: INFO: Pod "pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020483856s Mar 29 21:46:06.478: INFO: Pod "pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03250878s STEP: Saw pod success Mar 29 21:46:06.478: INFO: Pod "pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806" satisfied condition "success or failure" Mar 29 21:46:06.482: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806 container projected-secret-volume-test: STEP: delete the pod Mar 29 21:46:06.500: INFO: Waiting for pod pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806 to disappear Mar 29 21:46:06.504: INFO: Pod pod-projected-secrets-196bc329-7d0a-463f-859c-54e7dbefa806 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:06.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2061" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3199,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:06.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 21:46:06.967: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 21:46:08.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115167, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115167, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115167, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115166, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 21:46:12.008: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:12.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6618" for this suite. STEP: Destroying namespace "webhook-6618-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.667 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":189,"skipped":3202,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:12.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:46:12.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4" in namespace "projected-4683" to be "success or failure" Mar 29 21:46:12.284: INFO: Pod "downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.999635ms Mar 29 21:46:14.289: INFO: Pod "downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020290673s Mar 29 21:46:16.293: INFO: Pod "downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024569861s STEP: Saw pod success Mar 29 21:46:16.293: INFO: Pod "downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4" satisfied condition "success or failure" Mar 29 21:46:16.296: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4 container client-container: STEP: delete the pod Mar 29 21:46:16.329: INFO: Waiting for pod downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4 to disappear Mar 29 21:46:16.342: INFO: Pod downwardapi-volume-cf22659f-11c1-4cb1-85b1-e71d6d9482a4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:16.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4683" for this suite. 
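The "set mode on item file" case above turns on the per-item Mode knob of downward API projections. A minimal sketch of one such item, assuming recent k8s.io/api types; the path is illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // itemWithMode projects metadata.name into a file whose permissions
    // are forced to 0400, which is what the test stats and asserts on.
    func itemWithMode() corev1.DownwardAPIVolumeFile {
        mode := int32(0400)
        return corev1.DownwardAPIVolumeFile{
            Path: "podname",
            FieldRef: &corev1.ObjectFieldSelector{
                APIVersion: "v1",
                FieldPath:  "metadata.name",
            },
            Mode: &mode,
        }
    }

    func main() { fmt.Printf("%#o\n", *itemWithMode().Mode) }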
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3213,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:16.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:46:16.413: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 29 21:46:19.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3285 create -f -' Mar 29 21:46:19.793: INFO: stderr: "" Mar 29 21:46:19.793: INFO: stdout: "e2e-test-crd-publish-openapi-9752-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 29 21:46:19.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3285 delete e2e-test-crd-publish-openapi-9752-crds test-cr' Mar 29 21:46:19.897: INFO: stderr: "" Mar 29 21:46:19.897: INFO: stdout: "e2e-test-crd-publish-openapi-9752-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 29 21:46:19.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3285 apply -f -' Mar 29 21:46:20.167: INFO: stderr: "" Mar 29 21:46:20.167: INFO: stdout: "e2e-test-crd-publish-openapi-9752-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 29 21:46:20.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3285 delete e2e-test-crd-publish-openapi-9752-crds test-cr' Mar 29 21:46:20.274: INFO: stderr: "" Mar 29 21:46:20.274: INFO: stdout: "e2e-test-crd-publish-openapi-9752-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 29 21:46:20.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9752-crds' Mar 29 21:46:20.504: INFO: stderr: "" Mar 29 21:46:20.504: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9752-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:22.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3285" for this suite. 
• [SLOW TEST:6.046 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":191,"skipped":3219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:22.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 29 21:46:22.482: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 29 21:46:27.484: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:27.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-722" for this suite. 
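The ReplicationController "release" above is driven purely by labels: changing a pod's labels so they no longer match the RC selector makes the controller orphan the pod (dropping its ownerReference) and create a replacement to restore the replica count. A minimal client-go sketch of that label flip; the kubeconfig path mirrors this run, the pod name is a placeholder, and Get/Update take a context in recent client-go (the 1.17-era clients used context-free signatures):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // relabelPod rewrites the label an RC selects on, causing the
    // controller to release the pod and spin up a replacement.
    func relabelPod(ns, name string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        ctx := context.TODO()
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["name"] = "released" // no longer matches the RC selector
        _, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
        return err
    }

    func main() { _ = relabelPod("replication-controller-722", "pod-release-xxxxx") }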
• [SLOW TEST:5.195 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":192,"skipped":3243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:27.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:46:27.663: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:28.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8430" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":193,"skipped":3270,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:28.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1644.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1644.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1644.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1644.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:46:34.441: INFO: DNS probes using dns-1644/dns-test-e10fcd67-847a-41a1-bc24-06a10bfc1d1b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:34.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1644" for this suite. • [SLOW TEST:6.210 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":194,"skipped":3282,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:34.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 29 21:46:34.703: INFO: Waiting up to 5m0s for pod "pod-de856911-d01d-4646-bfbf-d8e088c0ada2" in namespace "emptydir-604" to be "success or failure" Mar 29 21:46:34.788: INFO: Pod "pod-de856911-d01d-4646-bfbf-d8e088c0ada2": Phase="Pending", Reason="", readiness=false. Elapsed: 84.534427ms Mar 29 21:46:36.791: INFO: Pod "pod-de856911-d01d-4646-bfbf-d8e088c0ada2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087765527s Mar 29 21:46:38.795: INFO: Pod "pod-de856911-d01d-4646-bfbf-d8e088c0ada2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.092190544s STEP: Saw pod success Mar 29 21:46:38.795: INFO: Pod "pod-de856911-d01d-4646-bfbf-d8e088c0ada2" satisfied condition "success or failure" Mar 29 21:46:38.799: INFO: Trying to get logs from node jerma-worker2 pod pod-de856911-d01d-4646-bfbf-d8e088c0ada2 container test-container: STEP: delete the pod Mar 29 21:46:38.847: INFO: Waiting for pod pod-de856911-d01d-4646-bfbf-d8e088c0ada2 to disappear Mar 29 21:46:38.868: INFO: Pod pod-de856911-d01d-4646-bfbf-d8e088c0ada2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:38.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-604" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:38.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:46:55.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1330" for this suite. • [SLOW TEST:17.102 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":196,"skipped":3394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:46:55.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:46:56.042: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5" in namespace "downward-api-2600" to be "success or failure" Mar 29 21:46:56.046: INFO: Pod "downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.909763ms Mar 29 21:46:58.050: INFO: Pod "downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007959302s Mar 29 21:47:00.054: INFO: Pod "downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012255879s STEP: Saw pod success Mar 29 21:47:00.054: INFO: Pod "downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5" satisfied condition "success or failure" Mar 29 21:47:00.057: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5 container client-container: STEP: delete the pod Mar 29 21:47:00.090: INFO: Waiting for pod downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5 to disappear Mar 29 21:47:00.098: INFO: Pod downwardapi-volume-2b978084-fb80-4cd1-abfc-741558aba1b5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:47:00.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2600" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3453,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:47:00.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9046 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 29 21:47:00.189: INFO: Found 0 stateful pods, waiting for 3 Mar 29 21:47:10.194: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:47:10.194: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:47:10.194: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Mar 29 21:47:20.204: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:47:20.204: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:47:20.204: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:47:20.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9046 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:47:20.472: INFO: stderr: "I0329 21:47:20.342581 1861 log.go:172] (0xc000107340) (0xc0006b7ae0) Create stream\nI0329 21:47:20.342644 1861 log.go:172] (0xc000107340) (0xc0006b7ae0) Stream added, broadcasting: 1\nI0329 21:47:20.345357 1861 log.go:172] (0xc000107340) Reply frame received for 1\nI0329 21:47:20.345393 1861 log.go:172] (0xc000107340) (0xc000920000) Create stream\nI0329 21:47:20.345405 1861 log.go:172] (0xc000107340) (0xc000920000) Stream added, broadcasting: 3\nI0329 21:47:20.346226 1861 log.go:172] (0xc000107340) Reply frame received for 3\nI0329 21:47:20.346265 1861 log.go:172] (0xc000107340) (0xc000024000) Create stream\nI0329 21:47:20.346279 1861 log.go:172] (0xc000107340) (0xc000024000) Stream added, broadcasting: 5\nI0329 21:47:20.347126 1861 log.go:172] (0xc000107340) Reply frame received for 5\nI0329 21:47:20.437274 1861 log.go:172] (0xc000107340) Data frame received for 5\nI0329 21:47:20.437336 1861 log.go:172] (0xc000024000) (5) Data frame handling\nI0329 21:47:20.437372 1861 log.go:172] (0xc000024000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:47:20.466712 1861 log.go:172] (0xc000107340) 
Data frame received for 3\nI0329 21:47:20.466736 1861 log.go:172] (0xc000920000) (3) Data frame handling\nI0329 21:47:20.466754 1861 log.go:172] (0xc000920000) (3) Data frame sent\nI0329 21:47:20.467117 1861 log.go:172] (0xc000107340) Data frame received for 5\nI0329 21:47:20.467131 1861 log.go:172] (0xc000024000) (5) Data frame handling\nI0329 21:47:20.467521 1861 log.go:172] (0xc000107340) Data frame received for 3\nI0329 21:47:20.467533 1861 log.go:172] (0xc000920000) (3) Data frame handling\nI0329 21:47:20.468936 1861 log.go:172] (0xc000107340) Data frame received for 1\nI0329 21:47:20.468947 1861 log.go:172] (0xc0006b7ae0) (1) Data frame handling\nI0329 21:47:20.468953 1861 log.go:172] (0xc0006b7ae0) (1) Data frame sent\nI0329 21:47:20.468960 1861 log.go:172] (0xc000107340) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0329 21:47:20.469007 1861 log.go:172] (0xc000107340) Go away received\nI0329 21:47:20.469286 1861 log.go:172] (0xc000107340) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0329 21:47:20.469299 1861 log.go:172] (0xc000107340) (0xc000920000) Stream removed, broadcasting: 3\nI0329 21:47:20.469305 1861 log.go:172] (0xc000107340) (0xc000024000) Stream removed, broadcasting: 5\n" Mar 29 21:47:20.472: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:47:20.472: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 29 21:47:30.504: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 29 21:47:40.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9046 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:47:40.776: INFO: stderr: "I0329 21:47:40.679486 1882 log.go:172] (0xc000118f20) (0xc00068c140) Create stream\nI0329 21:47:40.679551 1882 log.go:172] (0xc000118f20) (0xc00068c140) Stream added, broadcasting: 1\nI0329 21:47:40.682428 1882 log.go:172] (0xc000118f20) Reply frame received for 1\nI0329 21:47:40.682476 1882 log.go:172] (0xc000118f20) (0xc0007be000) Create stream\nI0329 21:47:40.682490 1882 log.go:172] (0xc000118f20) (0xc0007be000) Stream added, broadcasting: 3\nI0329 21:47:40.683533 1882 log.go:172] (0xc000118f20) Reply frame received for 3\nI0329 21:47:40.683593 1882 log.go:172] (0xc000118f20) (0xc00068c1e0) Create stream\nI0329 21:47:40.683617 1882 log.go:172] (0xc000118f20) (0xc00068c1e0) Stream added, broadcasting: 5\nI0329 21:47:40.684810 1882 log.go:172] (0xc000118f20) Reply frame received for 5\nI0329 21:47:40.768531 1882 log.go:172] (0xc000118f20) Data frame received for 5\nI0329 21:47:40.768588 1882 log.go:172] (0xc00068c1e0) (5) Data frame handling\nI0329 21:47:40.768609 1882 log.go:172] (0xc00068c1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0329 21:47:40.768633 1882 log.go:172] (0xc000118f20) Data frame received for 3\nI0329 21:47:40.768644 1882 log.go:172] (0xc0007be000) (3) Data frame handling\nI0329 21:47:40.768673 1882 log.go:172] (0xc0007be000) (3) Data frame sent\nI0329 21:47:40.768693 1882 log.go:172] (0xc000118f20) Data frame received for 3\nI0329 21:47:40.768705 1882 log.go:172] (0xc0007be000) (3) Data frame handling\nI0329 21:47:40.768735 1882 log.go:172] (0xc000118f20) Data frame received for 5\nI0329 
21:47:40.768758 1882 log.go:172] (0xc00068c1e0) (5) Data frame handling\nI0329 21:47:40.770542 1882 log.go:172] (0xc000118f20) Data frame received for 1\nI0329 21:47:40.770575 1882 log.go:172] (0xc00068c140) (1) Data frame handling\nI0329 21:47:40.770601 1882 log.go:172] (0xc00068c140) (1) Data frame sent\nI0329 21:47:40.770618 1882 log.go:172] (0xc000118f20) (0xc00068c140) Stream removed, broadcasting: 1\nI0329 21:47:40.770638 1882 log.go:172] (0xc000118f20) Go away received\nI0329 21:47:40.771165 1882 log.go:172] (0xc000118f20) (0xc00068c140) Stream removed, broadcasting: 1\nI0329 21:47:40.771199 1882 log.go:172] (0xc000118f20) (0xc0007be000) Stream removed, broadcasting: 3\nI0329 21:47:40.771213 1882 log.go:172] (0xc000118f20) (0xc00068c1e0) Stream removed, broadcasting: 5\n" Mar 29 21:47:40.776: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:47:40.776: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:47:50.832: INFO: Waiting for StatefulSet statefulset-9046/ss2 to complete update Mar 29 21:47:50.832: INFO: Waiting for Pod statefulset-9046/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 29 21:47:50.832: INFO: Waiting for Pod statefulset-9046/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 29 21:48:00.855: INFO: Waiting for StatefulSet statefulset-9046/ss2 to complete update Mar 29 21:48:00.855: INFO: Waiting for Pod statefulset-9046/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 29 21:48:11.130: INFO: Waiting for StatefulSet statefulset-9046/ss2 to complete update STEP: Rolling back to a previous revision Mar 29 21:48:20.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9046 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:48:21.103: INFO: stderr: "I0329 21:48:20.982695 1905 log.go:172] (0xc00065e6e0) (0xc000655f40) Create stream\nI0329 21:48:20.982750 1905 log.go:172] (0xc00065e6e0) (0xc000655f40) Stream added, broadcasting: 1\nI0329 21:48:20.985072 1905 log.go:172] (0xc00065e6e0) Reply frame received for 1\nI0329 21:48:20.985254 1905 log.go:172] (0xc00065e6e0) (0xc0005ab540) Create stream\nI0329 21:48:20.985277 1905 log.go:172] (0xc00065e6e0) (0xc0005ab540) Stream added, broadcasting: 3\nI0329 21:48:20.986211 1905 log.go:172] (0xc00065e6e0) Reply frame received for 3\nI0329 21:48:20.986256 1905 log.go:172] (0xc00065e6e0) (0xc000140000) Create stream\nI0329 21:48:20.986275 1905 log.go:172] (0xc00065e6e0) (0xc000140000) Stream added, broadcasting: 5\nI0329 21:48:20.987293 1905 log.go:172] (0xc00065e6e0) Reply frame received for 5\nI0329 21:48:21.069365 1905 log.go:172] (0xc00065e6e0) Data frame received for 5\nI0329 21:48:21.069400 1905 log.go:172] (0xc000140000) (5) Data frame handling\nI0329 21:48:21.069423 1905 log.go:172] (0xc000140000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:48:21.095947 1905 log.go:172] (0xc00065e6e0) Data frame received for 3\nI0329 21:48:21.095983 1905 log.go:172] (0xc0005ab540) (3) Data frame handling\nI0329 21:48:21.096014 1905 log.go:172] (0xc0005ab540) (3) Data frame sent\nI0329 21:48:21.096033 1905 log.go:172] (0xc00065e6e0) Data frame received for 3\nI0329 21:48:21.096145 1905 log.go:172] (0xc0005ab540) (3) Data frame handling\nI0329 21:48:21.096342 1905 log.go:172] (0xc00065e6e0) Data frame received for 
5\nI0329 21:48:21.096376 1905 log.go:172] (0xc000140000) (5) Data frame handling\nI0329 21:48:21.098420 1905 log.go:172] (0xc00065e6e0) Data frame received for 1\nI0329 21:48:21.098453 1905 log.go:172] (0xc000655f40) (1) Data frame handling\nI0329 21:48:21.098485 1905 log.go:172] (0xc000655f40) (1) Data frame sent\nI0329 21:48:21.098502 1905 log.go:172] (0xc00065e6e0) (0xc000655f40) Stream removed, broadcasting: 1\nI0329 21:48:21.098524 1905 log.go:172] (0xc00065e6e0) Go away received\nI0329 21:48:21.098912 1905 log.go:172] (0xc00065e6e0) (0xc000655f40) Stream removed, broadcasting: 1\nI0329 21:48:21.098935 1905 log.go:172] (0xc00065e6e0) (0xc0005ab540) Stream removed, broadcasting: 3\nI0329 21:48:21.098948 1905 log.go:172] (0xc00065e6e0) (0xc000140000) Stream removed, broadcasting: 5\n" Mar 29 21:48:21.104: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:48:21.104: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:48:31.137: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 29 21:48:41.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9046 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:48:41.390: INFO: stderr: "I0329 21:48:41.293767 1926 log.go:172] (0xc0008ee0b0) (0xc000229540) Create stream\nI0329 21:48:41.293820 1926 log.go:172] (0xc0008ee0b0) (0xc000229540) Stream added, broadcasting: 1\nI0329 21:48:41.296281 1926 log.go:172] (0xc0008ee0b0) Reply frame received for 1\nI0329 21:48:41.296328 1926 log.go:172] (0xc0008ee0b0) (0xc000942000) Create stream\nI0329 21:48:41.296353 1926 log.go:172] (0xc0008ee0b0) (0xc000942000) Stream added, broadcasting: 3\nI0329 21:48:41.297738 1926 log.go:172] (0xc0008ee0b0) Reply frame received for 3\nI0329 21:48:41.297804 1926 log.go:172] (0xc0008ee0b0) (0xc000914000) Create stream\nI0329 21:48:41.297823 1926 log.go:172] (0xc0008ee0b0) (0xc000914000) Stream added, broadcasting: 5\nI0329 21:48:41.298822 1926 log.go:172] (0xc0008ee0b0) Reply frame received for 5\nI0329 21:48:41.383945 1926 log.go:172] (0xc0008ee0b0) Data frame received for 3\nI0329 21:48:41.383986 1926 log.go:172] (0xc0008ee0b0) Data frame received for 5\nI0329 21:48:41.384015 1926 log.go:172] (0xc000914000) (5) Data frame handling\nI0329 21:48:41.384029 1926 log.go:172] (0xc000914000) (5) Data frame sent\nI0329 21:48:41.384044 1926 log.go:172] (0xc0008ee0b0) Data frame received for 5\nI0329 21:48:41.384064 1926 log.go:172] (0xc000914000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0329 21:48:41.384084 1926 log.go:172] (0xc000942000) (3) Data frame handling\nI0329 21:48:41.384244 1926 log.go:172] (0xc000942000) (3) Data frame sent\nI0329 21:48:41.384266 1926 log.go:172] (0xc0008ee0b0) Data frame received for 3\nI0329 21:48:41.384287 1926 log.go:172] (0xc000942000) (3) Data frame handling\nI0329 21:48:41.385663 1926 log.go:172] (0xc0008ee0b0) Data frame received for 1\nI0329 21:48:41.385683 1926 log.go:172] (0xc000229540) (1) Data frame handling\nI0329 21:48:41.385691 1926 log.go:172] (0xc000229540) (1) Data frame sent\nI0329 21:48:41.385702 1926 log.go:172] (0xc0008ee0b0) (0xc000229540) Stream removed, broadcasting: 1\nI0329 21:48:41.385712 1926 log.go:172] (0xc0008ee0b0) Go away received\nI0329 21:48:41.386160 1926 log.go:172] (0xc0008ee0b0) (0xc000229540) Stream removed, 
broadcasting: 1\nI0329 21:48:41.386181 1926 log.go:172] (0xc0008ee0b0) (0xc000942000) Stream removed, broadcasting: 3\nI0329 21:48:41.386192 1926 log.go:172] (0xc0008ee0b0) (0xc000914000) Stream removed, broadcasting: 5\n" Mar 29 21:48:41.390: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:48:41.390: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:48:51.410: INFO: Waiting for StatefulSet statefulset-9046/ss2 to complete update Mar 29 21:48:51.411: INFO: Waiting for Pod statefulset-9046/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 29 21:48:51.411: INFO: Waiting for Pod statefulset-9046/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 29 21:49:01.419: INFO: Waiting for StatefulSet statefulset-9046/ss2 to complete update Mar 29 21:49:01.419: INFO: Waiting for Pod statefulset-9046/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 29 21:49:11.418: INFO: Deleting all statefulset in ns statefulset-9046 Mar 29 21:49:11.421: INFO: Scaling statefulset ss2 to 0 Mar 29 21:49:31.457: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:49:31.460: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:49:31.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9046" for this suite. • [SLOW TEST:151.377 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":198,"skipped":3473,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:49:31.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:49:31.525: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] 
[k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:49:35.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1721" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3482,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:49:35.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:49:35.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955" in namespace "projected-8302" to be "success or failure" Mar 29 21:49:35.791: INFO: Pod "downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955": Phase="Pending", Reason="", readiness=false. Elapsed: 13.738848ms Mar 29 21:49:37.827: INFO: Pod "downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049859633s Mar 29 21:49:39.832: INFO: Pod "downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053959746s STEP: Saw pod success Mar 29 21:49:39.832: INFO: Pod "downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955" satisfied condition "success or failure" Mar 29 21:49:39.835: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955 container client-container: STEP: delete the pod Mar 29 21:49:39.896: INFO: Waiting for pod downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955 to disappear Mar 29 21:49:39.911: INFO: Pod downwardapi-volume-aa430bef-a712-45ce-97cb-c97ec8d0e955 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:49:39.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8302" for this suite. 
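The projected flavor of the same downward API check differs only in packaging: a projected volume lets downwardAPI items share a single mount with secret and configMap sources. A minimal sketch of the downwardAPI source, here exposing the memory request rather than the limit (names again illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: "32Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
  EOF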
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3496,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:49:39.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2794 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 29 21:49:39.982: INFO: Found 0 stateful pods, waiting for 3 Mar 29 21:49:49.989: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:49:49.989: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:49:49.989: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 29 21:49:50.010: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 29 21:50:00.059: INFO: Updating stateful set ss2 Mar 29 21:50:00.085: INFO: Waiting for Pod statefulset-2794/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 29 21:50:10.235: INFO: Found 2 stateful pods, waiting for 3 Mar 29 21:50:20.240: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:50:20.240: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:50:20.240: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 29 21:50:20.264: INFO: Updating stateful set ss2 Mar 29 21:50:20.315: INFO: Waiting for Pod statefulset-2794/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 29 21:50:30.340: INFO: Updating stateful set ss2 Mar 29 21:50:30.355: INFO: Waiting for StatefulSet statefulset-2794/ss2 to complete update Mar 29 21:50:30.355: INFO: Waiting for Pod statefulset-2794/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 29 21:50:40.363: INFO: Waiting for StatefulSet statefulset-2794/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 29 21:50:50.363: INFO: Deleting all statefulset in ns statefulset-2794 Mar 29 21:50:50.366: INFO: Scaling statefulset ss2 to 0 Mar 29 21:51:10.403: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:51:10.406: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:51:10.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2794" for this suite. • [SLOW TEST:90.517 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":201,"skipped":3499,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:51:10.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8357 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8357 I0329 21:51:10.608142 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8357, replica count: 2 I0329 21:51:13.658599 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:51:16.658850 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 29 21:51:16.658: INFO: Creating new exec pod Mar 29 21:51:21.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8357 execpodzmzhg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 29 21:51:21.903: INFO: stderr: "I0329 21:51:21.799221 1948 log.go:172] (0xc0006749a0) (0xc0006701e0) Create stream\nI0329 21:51:21.799295 1948 log.go:172] (0xc0006749a0) (0xc0006701e0) Stream added, broadcasting: 1\nI0329 
21:51:21.812182 1948 log.go:172] (0xc0006749a0) Reply frame received for 1\nI0329 21:51:21.812235 1948 log.go:172] (0xc0006749a0) (0xc0005a4000) Create stream\nI0329 21:51:21.812248 1948 log.go:172] (0xc0006749a0) (0xc0005a4000) Stream added, broadcasting: 3\nI0329 21:51:21.813236 1948 log.go:172] (0xc0006749a0) Reply frame received for 3\nI0329 21:51:21.813273 1948 log.go:172] (0xc0006749a0) (0xc000670280) Create stream\nI0329 21:51:21.813287 1948 log.go:172] (0xc0006749a0) (0xc000670280) Stream added, broadcasting: 5\nI0329 21:51:21.813978 1948 log.go:172] (0xc0006749a0) Reply frame received for 5\nI0329 21:51:21.896589 1948 log.go:172] (0xc0006749a0) Data frame received for 3\nI0329 21:51:21.896614 1948 log.go:172] (0xc0005a4000) (3) Data frame handling\nI0329 21:51:21.896727 1948 log.go:172] (0xc0006749a0) Data frame received for 5\nI0329 21:51:21.896765 1948 log.go:172] (0xc000670280) (5) Data frame handling\nI0329 21:51:21.896789 1948 log.go:172] (0xc000670280) (5) Data frame sent\nI0329 21:51:21.896803 1948 log.go:172] (0xc0006749a0) Data frame received for 5\nI0329 21:51:21.896819 1948 log.go:172] (0xc000670280) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0329 21:51:21.899055 1948 log.go:172] (0xc0006749a0) Data frame received for 1\nI0329 21:51:21.899079 1948 log.go:172] (0xc0006701e0) (1) Data frame handling\nI0329 21:51:21.899091 1948 log.go:172] (0xc0006701e0) (1) Data frame sent\nI0329 21:51:21.899108 1948 log.go:172] (0xc0006749a0) (0xc0006701e0) Stream removed, broadcasting: 1\nI0329 21:51:21.899121 1948 log.go:172] (0xc0006749a0) Go away received\nI0329 21:51:21.899592 1948 log.go:172] (0xc0006749a0) (0xc0006701e0) Stream removed, broadcasting: 1\nI0329 21:51:21.899621 1948 log.go:172] (0xc0006749a0) (0xc0005a4000) Stream removed, broadcasting: 3\nI0329 21:51:21.899642 1948 log.go:172] (0xc0006749a0) (0xc000670280) Stream removed, broadcasting: 5\n" Mar 29 21:51:21.903: INFO: stdout: "" Mar 29 21:51:21.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8357 execpodzmzhg -- /bin/sh -x -c nc -zv -t -w 2 10.107.140.146 80' Mar 29 21:51:22.091: INFO: stderr: "I0329 21:51:22.027605 1972 log.go:172] (0xc000558160) (0xc0006e9a40) Create stream\nI0329 21:51:22.027647 1972 log.go:172] (0xc000558160) (0xc0006e9a40) Stream added, broadcasting: 1\nI0329 21:51:22.030204 1972 log.go:172] (0xc000558160) Reply frame received for 1\nI0329 21:51:22.030268 1972 log.go:172] (0xc000558160) (0xc000baa000) Create stream\nI0329 21:51:22.030296 1972 log.go:172] (0xc000558160) (0xc000baa000) Stream added, broadcasting: 3\nI0329 21:51:22.031330 1972 log.go:172] (0xc000558160) Reply frame received for 3\nI0329 21:51:22.031374 1972 log.go:172] (0xc000558160) (0xc000a02000) Create stream\nI0329 21:51:22.031387 1972 log.go:172] (0xc000558160) (0xc000a02000) Stream added, broadcasting: 5\nI0329 21:51:22.032464 1972 log.go:172] (0xc000558160) Reply frame received for 5\nI0329 21:51:22.085367 1972 log.go:172] (0xc000558160) Data frame received for 5\nI0329 21:51:22.085427 1972 log.go:172] (0xc000a02000) (5) Data frame handling\nI0329 21:51:22.085453 1972 log.go:172] (0xc000a02000) (5) Data frame sent\nI0329 21:51:22.085473 1972 log.go:172] (0xc000558160) Data frame received for 5\nI0329 21:51:22.085491 1972 log.go:172] (0xc000a02000) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.140.146 80\nConnection to 10.107.140.146 80 port [tcp/http] succeeded!\nI0329 
21:51:22.085529 1972 log.go:172] (0xc000558160) Data frame received for 3\nI0329 21:51:22.085542 1972 log.go:172] (0xc000baa000) (3) Data frame handling\nI0329 21:51:22.086771 1972 log.go:172] (0xc000558160) Data frame received for 1\nI0329 21:51:22.086791 1972 log.go:172] (0xc0006e9a40) (1) Data frame handling\nI0329 21:51:22.086804 1972 log.go:172] (0xc0006e9a40) (1) Data frame sent\nI0329 21:51:22.086976 1972 log.go:172] (0xc000558160) (0xc0006e9a40) Stream removed, broadcasting: 1\nI0329 21:51:22.087032 1972 log.go:172] (0xc000558160) Go away received\nI0329 21:51:22.087264 1972 log.go:172] (0xc000558160) (0xc0006e9a40) Stream removed, broadcasting: 1\nI0329 21:51:22.087284 1972 log.go:172] (0xc000558160) (0xc000baa000) Stream removed, broadcasting: 3\nI0329 21:51:22.087294 1972 log.go:172] (0xc000558160) (0xc000a02000) Stream removed, broadcasting: 5\n" Mar 29 21:51:22.092: INFO: stdout: "" Mar 29 21:51:22.092: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:51:22.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8357" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.736 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":202,"skipped":3517,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:51:22.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 29 21:51:22.233: INFO: Waiting up to 5m0s for pod "pod-9d56adce-3770-40c4-b6d6-7fed8f03b074" in namespace "emptydir-1176" to be "success or failure" Mar 29 21:51:22.236: INFO: Pod "pod-9d56adce-3770-40c4-b6d6-7fed8f03b074": Phase="Pending", Reason="", readiness=false. Elapsed: 3.771422ms Mar 29 21:51:24.241: INFO: Pod "pod-9d56adce-3770-40c4-b6d6-7fed8f03b074": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008185731s Mar 29 21:51:26.245: INFO: Pod "pod-9d56adce-3770-40c4-b6d6-7fed8f03b074": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012577546s STEP: Saw pod success Mar 29 21:51:26.245: INFO: Pod "pod-9d56adce-3770-40c4-b6d6-7fed8f03b074" satisfied condition "success or failure" Mar 29 21:51:26.248: INFO: Trying to get logs from node jerma-worker pod pod-9d56adce-3770-40c4-b6d6-7fed8f03b074 container test-container: STEP: delete the pod Mar 29 21:51:26.293: INFO: Waiting for pod pod-9d56adce-3770-40c4-b6d6-7fed8f03b074 to disappear Mar 29 21:51:26.315: INFO: Pod pod-9d56adce-3770-40c4-b6d6-7fed8f03b074 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:51:26.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1176" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3536,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:51:26.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-bk8w STEP: Creating a pod to test atomic-volume-subpath Mar 29 21:51:26.405: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bk8w" in namespace "subpath-2264" to be "success or failure" Mar 29 21:51:26.446: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Pending", Reason="", readiness=false. Elapsed: 41.207633ms Mar 29 21:51:28.450: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045113697s Mar 29 21:51:30.455: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 4.049384046s Mar 29 21:51:32.459: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 6.053281166s Mar 29 21:51:34.463: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 8.057428686s Mar 29 21:51:36.466: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 10.060984049s Mar 29 21:51:38.470: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 12.065148367s Mar 29 21:51:40.475: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 14.069581146s Mar 29 21:51:42.479: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 16.07382053s Mar 29 21:51:44.483: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.0774329s Mar 29 21:51:46.486: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 20.081199918s Mar 29 21:51:48.491: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Running", Reason="", readiness=true. Elapsed: 22.085558276s Mar 29 21:51:50.495: INFO: Pod "pod-subpath-test-configmap-bk8w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.089798297s STEP: Saw pod success Mar 29 21:51:50.495: INFO: Pod "pod-subpath-test-configmap-bk8w" satisfied condition "success or failure" Mar 29 21:51:50.499: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-bk8w container test-container-subpath-configmap-bk8w: STEP: delete the pod Mar 29 21:51:50.545: INFO: Waiting for pod pod-subpath-test-configmap-bk8w to disappear Mar 29 21:51:50.560: INFO: Pod pod-subpath-test-configmap-bk8w no longer exists STEP: Deleting pod pod-subpath-test-configmap-bk8w Mar 29 21:51:50.560: INFO: Deleting pod "pod-subpath-test-configmap-bk8w" in namespace "subpath-2264" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:51:50.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2264" for this suite. • [SLOW TEST:24.249 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":204,"skipped":3539,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:51:50.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-7ad388f5-0430-48d2-878f-98a42211cf40 STEP: Creating secret with name s-test-opt-upd-45d035d4-fdd4-4e49-8471-1b22429dcc5d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7ad388f5-0430-48d2-878f-98a42211cf40 STEP: Updating secret s-test-opt-upd-45d035d4-fdd4-4e49-8471-1b22429dcc5d STEP: Creating secret with name s-test-opt-create-cb9e837d-b4da-460d-877d-5b416075954e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:53:07.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7083" for this suite. 
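The "optional" behavior exercised above: a secret volume marked optional mounts even when the named secret does not exist, and the kubelet propagates later creates, updates, and deletes into the mounted files on its sync interval. A minimal sketch, assuming an illustrative secret name:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-optional-demo
  spec:
    containers:
    - name: c
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: creds
        mountPath: /etc/creds
    volumes:
    - name: creds
      secret:
        secretName: maybe-missing   # may not exist yet
        optional: true              # pod still starts with an empty mount
  EOF
  # Create the secret afterwards; the kubelet eventually projects the keys
  kubectl create secret generic maybe-missing --from-literal=user=admin
  kubectl exec secret-optional-demo -- ls /etc/creds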
• [SLOW TEST:76.541 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3551,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:53:07.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 29 21:53:11.682: INFO: Successfully updated pod "adopt-release-4blxr" STEP: Checking that the Job readopts the Pod Mar 29 21:53:11.682: INFO: Waiting up to 15m0s for pod "adopt-release-4blxr" in namespace "job-9749" to be "adopted" Mar 29 21:53:11.700: INFO: Pod "adopt-release-4blxr": Phase="Running", Reason="", readiness=true. Elapsed: 18.020586ms Mar 29 21:53:13.704: INFO: Pod "adopt-release-4blxr": Phase="Running", Reason="", readiness=true. Elapsed: 2.022197871s Mar 29 21:53:13.704: INFO: Pod "adopt-release-4blxr" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 29 21:53:14.243: INFO: Successfully updated pod "adopt-release-4blxr" STEP: Checking that the Job releases the Pod Mar 29 21:53:14.243: INFO: Waiting up to 15m0s for pod "adopt-release-4blxr" in namespace "job-9749" to be "released" Mar 29 21:53:14.250: INFO: Pod "adopt-release-4blxr": Phase="Running", Reason="", readiness=true. Elapsed: 6.83672ms Mar 29 21:53:16.254: INFO: Pod "adopt-release-4blxr": Phase="Running", Reason="", readiness=true. Elapsed: 2.010612187s Mar 29 21:53:16.254: INFO: Pod "adopt-release-4blxr" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:53:16.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9749" for this suite. 
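Adoption and release both hinge on the Job's label selector and on ownerReferences: stripping a pod's ownerReferences orphans it, after which the Job controller re-adopts any orphan whose labels still match its selector; removing those labels instead makes the controller disown the pod. A hedged kubectl reproduction of the two steps (the job-name label value matches this spec's job; the patch path is standard JSON Patch):

  POD=$(kubectl get pods -l job-name=adopt-release -o jsonpath='{.items[0].metadata.name}')
  # Orphan: drop ownerReferences; the controller re-adopts the matching pod
  kubectl patch pod "$POD" --type=json \
    -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'
  # Release: remove the labels the Job selects on; the controller disowns the pod
  kubectl label pod "$POD" job-name- controller-uid-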
• [SLOW TEST:9.149 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":206,"skipped":3568,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:53:16.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 29 21:53:16.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9099' Mar 29 21:53:18.925: INFO: stderr: "" Mar 29 21:53:18.925: INFO: stdout: "pod/pause created\n" Mar 29 21:53:18.925: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 29 21:53:18.925: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9099" to be "running and ready" Mar 29 21:53:18.945: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 20.364778ms Mar 29 21:53:20.949: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02352647s Mar 29 21:53:22.953: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.027504997s Mar 29 21:53:22.953: INFO: Pod "pause" satisfied condition "running and ready" Mar 29 21:53:22.953: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 29 21:53:22.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9099' Mar 29 21:53:23.058: INFO: stderr: "" Mar 29 21:53:23.058: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 29 21:53:23.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9099' Mar 29 21:53:23.150: INFO: stderr: "" Mar 29 21:53:23.150: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 29 21:53:23.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9099' Mar 29 21:53:23.244: INFO: stderr: "" Mar 29 21:53:23.244: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 29 21:53:23.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9099' Mar 29 21:53:23.336: INFO: stderr: "" Mar 29 21:53:23.336: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 29 21:53:23.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9099' Mar 29 21:53:23.465: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 29 21:53:23.465: INFO: stdout: "pod \"pause\" force deleted\n" Mar 29 21:53:23.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9099' Mar 29 21:53:23.675: INFO: stderr: "No resources found in kubectl-9099 namespace.\n" Mar 29 21:53:23.675: INFO: stdout: "" Mar 29 21:53:23.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9099 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 29 21:53:23.774: INFO: stderr: "" Mar 29 21:53:23.774: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:53:23.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9099" for this suite. 
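Condensed, the label round-trip this test performs is four kubectl invocations; -L adds a column for the label so both presence and absence are visible:
$ kubectl label pods pause testing-label=testing-label-value -n kubectl-9099
$ kubectl get pod pause -L testing-label -n kubectl-9099     # TESTING-LABEL column holds the value
$ kubectl label pods pause testing-label- -n kubectl-9099    # trailing '-' deletes the label
$ kubectl get pod pause -L testing-label -n kubectl-9099     # column is now empty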
• [SLOW TEST:7.520 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":207,"skipped":3570,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:53:23.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:53:24.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4802" for this suite. 
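The QOS-class assertion above relies on the API server's classification: when every container's cpu and memory requests equal its limits, the pod is classed Guaranteed, and the class is surfaced in pod status. A quick read-back (the pod name is a placeholder):
$ kubectl get pod <pod-name> -n pods-4802 -o jsonpath='{.status.qosClass}'
# prints Guaranteed when requests == limits for both resources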
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":208,"skipped":3577,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:53:24.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 29 21:53:24.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6725' Mar 29 21:53:24.750: INFO: stderr: "" Mar 29 21:53:24.750: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 29 21:53:24.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6725' Mar 29 21:53:24.851: INFO: stderr: "" Mar 29 21:53:24.852: INFO: stdout: "update-demo-nautilus-dq7nr update-demo-nautilus-xb5mr " Mar 29 21:53:24.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dq7nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:24.959: INFO: stderr: "" Mar 29 21:53:24.959: INFO: stdout: "" Mar 29 21:53:24.959: INFO: update-demo-nautilus-dq7nr is created but not running Mar 29 21:53:29.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6725' Mar 29 21:53:30.068: INFO: stderr: "" Mar 29 21:53:30.068: INFO: stdout: "update-demo-nautilus-dq7nr update-demo-nautilus-xb5mr " Mar 29 21:53:30.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dq7nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:30.162: INFO: stderr: "" Mar 29 21:53:30.162: INFO: stdout: "true" Mar 29 21:53:30.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dq7nr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:30.251: INFO: stderr: "" Mar 29 21:53:30.251: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 21:53:30.251: INFO: validating pod update-demo-nautilus-dq7nr Mar 29 21:53:30.255: INFO: got data: { "image": "nautilus.jpg" } Mar 29 21:53:30.255: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 21:53:30.255: INFO: update-demo-nautilus-dq7nr is verified up and running Mar 29 21:53:30.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xb5mr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:30.342: INFO: stderr: "" Mar 29 21:53:30.342: INFO: stdout: "true" Mar 29 21:53:30.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xb5mr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:30.430: INFO: stderr: "" Mar 29 21:53:30.430: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 21:53:30.430: INFO: validating pod update-demo-nautilus-xb5mr Mar 29 21:53:30.434: INFO: got data: { "image": "nautilus.jpg" } Mar 29 21:53:30.434: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 21:53:30.434: INFO: update-demo-nautilus-xb5mr is verified up and running STEP: rolling-update to new replication controller Mar 29 21:53:30.442: INFO: scanned /root for discovery docs: Mar 29 21:53:30.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6725' Mar 29 21:53:53.009: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 29 21:53:53.009: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 29 21:53:53.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6725' Mar 29 21:53:53.099: INFO: stderr: "" Mar 29 21:53:53.099: INFO: stdout: "update-demo-kitten-pl6gm update-demo-kitten-w4fnl " Mar 29 21:53:53.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pl6gm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:53.198: INFO: stderr: "" Mar 29 21:53:53.198: INFO: stdout: "true" Mar 29 21:53:53.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pl6gm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:53.301: INFO: stderr: "" Mar 29 21:53:53.301: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 29 21:53:53.301: INFO: validating pod update-demo-kitten-pl6gm Mar 29 21:53:53.313: INFO: got data: { "image": "kitten.jpg" } Mar 29 21:53:53.313: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 29 21:53:53.313: INFO: update-demo-kitten-pl6gm is verified up and running Mar 29 21:53:53.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w4fnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:53.411: INFO: stderr: "" Mar 29 21:53:53.411: INFO: stdout: "true" Mar 29 21:53:53.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w4fnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' Mar 29 21:53:53.525: INFO: stderr: "" Mar 29 21:53:53.525: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 29 21:53:53.525: INFO: validating pod update-demo-kitten-w4fnl Mar 29 21:53:53.528: INFO: got data: { "image": "kitten.jpg" } Mar 29 21:53:53.528: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 29 21:53:53.528: INFO: update-demo-kitten-w4fnl is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:53:53.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6725" for this suite. 
• [SLOW TEST:29.274 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":209,"skipped":3597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:53:53.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:53:53.634: INFO: Creating ReplicaSet my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200 Mar 29 21:53:53.653: INFO: Pod name my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200: Found 0 pods out of 1 Mar 29 21:53:58.673: INFO: Pod name my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200: Found 1 pods out of 1 Mar 29 21:53:58.673: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200" is running Mar 29 21:53:58.675: INFO: Pod "my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200-d2zr7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:53:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:53:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:53:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:53:53 +0000 UTC Reason: Message:}]) Mar 29 21:53:58.675: INFO: Trying to dial the pod Mar 29 21:54:03.688: INFO: Controller my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200: Got expected result from replica 1 [my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200-d2zr7]: "my-hostname-basic-04320333-7172-46f0-bdfa-dd19ffe81200-d2zr7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:54:03.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7080" for this suite. 
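The per-replica validation above dials each pod and expects the pod's own name back, since the basic-image pods simply serve their hostname over HTTP. Reproduced by hand it would look something like this (port 9376 is the conventional serve-hostname port and is an assumption here; the suite itself dials through the API proxy):
$ POD_IP=$(kubectl get pod <replica-pod> -n replicaset-7080 -o jsonpath='{.status.podIP}')
$ curl -s http://$POD_IP:9376/    # expected output: the pod's own name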
• [SLOW TEST:10.167 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":210,"skipped":3636,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:54:03.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6945 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 29 21:54:03.750: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 29 21:54:27.849: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.53:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:54:27.849: INFO: >>> kubeConfig: /root/.kube/config I0329 21:54:27.885456 6 log.go:172] (0xc0014262c0) (0xc00107b220) Create stream I0329 21:54:27.885485 6 log.go:172] (0xc0014262c0) (0xc00107b220) Stream added, broadcasting: 1 I0329 21:54:27.888094 6 log.go:172] (0xc0014262c0) Reply frame received for 1 I0329 21:54:27.888140 6 log.go:172] (0xc0014262c0) (0xc00183c000) Create stream I0329 21:54:27.888156 6 log.go:172] (0xc0014262c0) (0xc00183c000) Stream added, broadcasting: 3 I0329 21:54:27.889104 6 log.go:172] (0xc0014262c0) Reply frame received for 3 I0329 21:54:27.889306 6 log.go:172] (0xc0014262c0) (0xc000dccf00) Create stream I0329 21:54:27.889324 6 log.go:172] (0xc0014262c0) (0xc000dccf00) Stream added, broadcasting: 5 I0329 21:54:27.890391 6 log.go:172] (0xc0014262c0) Reply frame received for 5 I0329 21:54:27.954514 6 log.go:172] (0xc0014262c0) Data frame received for 3 I0329 21:54:27.954551 6 log.go:172] (0xc00183c000) (3) Data frame handling I0329 21:54:27.954564 6 log.go:172] (0xc00183c000) (3) Data frame sent I0329 21:54:27.954572 6 log.go:172] (0xc0014262c0) Data frame received for 3 I0329 21:54:27.954581 6 log.go:172] (0xc00183c000) (3) Data frame handling I0329 21:54:27.954599 6 log.go:172] (0xc0014262c0) Data frame received for 5 I0329 21:54:27.954611 6 log.go:172] (0xc000dccf00) (5) Data frame handling I0329 21:54:27.956037 6 log.go:172] (0xc0014262c0) Data frame received for 1 I0329 21:54:27.956078 6 log.go:172] (0xc00107b220) (1) Data frame handling I0329 21:54:27.956109 6 log.go:172] (0xc00107b220) (1) Data frame sent I0329 21:54:27.956155 6 log.go:172] 
(0xc0014262c0) (0xc00107b220) Stream removed, broadcasting: 1 I0329 21:54:27.956310 6 log.go:172] (0xc0014262c0) (0xc00107b220) Stream removed, broadcasting: 1 I0329 21:54:27.956346 6 log.go:172] (0xc0014262c0) Go away received I0329 21:54:27.956387 6 log.go:172] (0xc0014262c0) (0xc00183c000) Stream removed, broadcasting: 3 I0329 21:54:27.956413 6 log.go:172] (0xc0014262c0) (0xc000dccf00) Stream removed, broadcasting: 5 Mar 29 21:54:27.956: INFO: Found all expected endpoints: [netserver-0] Mar 29 21:54:27.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.84:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 29 21:54:27.959: INFO: >>> kubeConfig: /root/.kube/config I0329 21:54:27.985727 6 log.go:172] (0xc0014268f0) (0xc0008a0140) Create stream I0329 21:54:27.985750 6 log.go:172] (0xc0014268f0) (0xc0008a0140) Stream added, broadcasting: 1 I0329 21:54:27.988140 6 log.go:172] (0xc0014268f0) Reply frame received for 1 I0329 21:54:27.988165 6 log.go:172] (0xc0014268f0) (0xc000dcd2c0) Create stream I0329 21:54:27.988174 6 log.go:172] (0xc0014268f0) (0xc000dcd2c0) Stream added, broadcasting: 3 I0329 21:54:27.989083 6 log.go:172] (0xc0014268f0) Reply frame received for 3 I0329 21:54:27.989239 6 log.go:172] (0xc0014268f0) (0xc000dcd540) Create stream I0329 21:54:27.989258 6 log.go:172] (0xc0014268f0) (0xc000dcd540) Stream added, broadcasting: 5 I0329 21:54:27.990164 6 log.go:172] (0xc0014268f0) Reply frame received for 5 I0329 21:54:28.062589 6 log.go:172] (0xc0014268f0) Data frame received for 3 I0329 21:54:28.062633 6 log.go:172] (0xc000dcd2c0) (3) Data frame handling I0329 21:54:28.062652 6 log.go:172] (0xc000dcd2c0) (3) Data frame sent I0329 21:54:28.062665 6 log.go:172] (0xc0014268f0) Data frame received for 3 I0329 21:54:28.062676 6 log.go:172] (0xc000dcd2c0) (3) Data frame handling I0329 21:54:28.062749 6 log.go:172] (0xc0014268f0) Data frame received for 5 I0329 21:54:28.062798 6 log.go:172] (0xc000dcd540) (5) Data frame handling I0329 21:54:28.064260 6 log.go:172] (0xc0014268f0) Data frame received for 1 I0329 21:54:28.064280 6 log.go:172] (0xc0008a0140) (1) Data frame handling I0329 21:54:28.064295 6 log.go:172] (0xc0008a0140) (1) Data frame sent I0329 21:54:28.064862 6 log.go:172] (0xc0014268f0) (0xc0008a0140) Stream removed, broadcasting: 1 I0329 21:54:28.064905 6 log.go:172] (0xc0014268f0) Go away received I0329 21:54:28.064952 6 log.go:172] (0xc0014268f0) (0xc0008a0140) Stream removed, broadcasting: 1 I0329 21:54:28.064969 6 log.go:172] (0xc0014268f0) (0xc000dcd2c0) Stream removed, broadcasting: 3 I0329 21:54:28.064978 6 log.go:172] (0xc0014268f0) (0xc000dcd540) Stream removed, broadcasting: 5 Mar 29 21:54:28.064: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:54:28.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6945" for this suite. 
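Underneath the exec-stream noise, each endpoint check above is a single HTTP probe from the host-network test pod to a netserver pod's /hostName handler; lifted out of the ExecWithOptions dump with a placeholder IP:
$ kubectl exec host-test-container-pod -n pod-network-test-6945 -c agnhost -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://<pod-ip>:8080/hostName | grep -v '^\s*$'"
The test passes once every expected netserver hostname has been collected this way.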
• [SLOW TEST:24.377 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3640,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:54:28.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 29 21:54:28.122: INFO: >>> kubeConfig: /root/.kube/config Mar 29 21:54:31.021: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:54:41.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3655" for this suite. 
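What this spec verifies is that both CRDs, despite living in different API groups, are published into the aggregated OpenAPI document. The same thing can be spot-checked against the apiserver outside the suite (the definition name and plural below are hypothetical; published definitions follow the reversed-group naming):
$ kubectl get --raw /openapi/v2 | grep -c 'com.example.mygroup.v1.MyKind'
$ kubectl explain mykinds    # only resolves once the CRD's structural schema has been published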
• [SLOW TEST:13.409 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":212,"skipped":3642,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:54:41.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7495.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7495.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7495.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7495.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 29 21:54:47.598: INFO: DNS probes using dns-7495/dns-test-e2ae40ee-7cc8-44bd-af23-765135f0036e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:54:47.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7495" for this suite. 
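Unrolled, each probe loop above performs one hostname lookup through the headless service plus UDP and TCP A-record queries for the pod's synthesized DNS name (the 10-0-0-1 record below is a placeholder built the same way the script derives podARec from `hostname -i`):
$ getent hosts dns-querier-2.dns-test-service-2.dns-7495.svc.cluster.local
$ dig +notcp +noall +answer +search 10-0-0-1.dns-7495.pod.cluster.local A   # UDP path
$ dig +tcp   +noall +answer +search 10-0-0-1.dns-7495.pod.cluster.local A   # TCP path
Each successful check writes an OK marker that the prober then collects from the results volume.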
• [SLOW TEST:6.244 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":213,"skipped":3645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:54:47.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:54:47.948: INFO: Waiting up to 5m0s for pod "busybox-user-65534-815b3ec7-06eb-46fe-960f-f304042cb46a" in namespace "security-context-test-6931" to be "success or failure" Mar 29 21:54:47.958: INFO: Pod "busybox-user-65534-815b3ec7-06eb-46fe-960f-f304042cb46a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.257797ms Mar 29 21:54:49.962: INFO: Pod "busybox-user-65534-815b3ec7-06eb-46fe-960f-f304042cb46a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014180082s Mar 29 21:54:51.966: INFO: Pod "busybox-user-65534-815b3ec7-06eb-46fe-960f-f304042cb46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018201207s Mar 29 21:54:51.966: INFO: Pod "busybox-user-65534-815b3ec7-06eb-46fe-960f-f304042cb46a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:54:51.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6931" for this suite. 
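The uid check above boils down to running a container with securityContext.runAsUser: 65534 and confirming the effective uid inside matches. A hand-rolled approximation (the pod name and the --overrides merge are illustrative, not how the suite constructs its pod):
$ kubectl run busybox-user-65534 --restart=Never --image=busybox \
    --overrides='{"spec":{"securityContext":{"runAsUser":65534}}}' -- id -u
$ kubectl logs busybox-user-65534   # expected: 65534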
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3682,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:54:51.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 29 21:54:52.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5298' Mar 29 21:54:52.374: INFO: stderr: "" Mar 29 21:54:52.374: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 29 21:54:53.379: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:54:53.379: INFO: Found 0 / 1 Mar 29 21:54:54.390: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:54:54.390: INFO: Found 0 / 1 Mar 29 21:54:55.378: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:54:55.378: INFO: Found 1 / 1 Mar 29 21:54:55.378: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 29 21:54:55.381: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:54:55.381: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 29 21:54:55.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-8vtqm --namespace=kubectl-5298 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 29 21:54:55.488: INFO: stderr: "" Mar 29 21:54:55.488: INFO: stdout: "pod/agnhost-master-8vtqm patched\n" STEP: checking annotations Mar 29 21:54:55.498: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 21:54:55.498: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:54:55.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5298" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":215,"skipped":3689,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:54:55.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31 Mar 29 21:54:55.589: INFO: Pod name my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31: Found 0 pods out of 1 Mar 29 21:55:00.592: INFO: Pod name my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31: Found 1 pods out of 1 Mar 29 21:55:00.592: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31" are running Mar 29 21:55:00.595: INFO: Pod "my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31-jq4kc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:54:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:54:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:54:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-29 21:54:55 +0000 UTC Reason: Message:}]) Mar 29 21:55:00.595: INFO: Trying to dial the pod Mar 29 21:55:05.607: INFO: Controller my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31: Got expected result from replica 1 [my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31-jq4kc]: "my-hostname-basic-2bac08af-9cc1-43eb-9d85-37118e448a31-jq4kc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:55:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5842" for this suite. 
• [SLOW TEST:10.108 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":216,"skipped":3698,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:55:05.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-dbc04ef3-6f23-4a74-9be1-3f489b8c6e1e STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:55:11.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-918" for this suite. • [SLOW TEST:6.254 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:55:11.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 29 21:55:11.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job 
--restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5333' Mar 29 21:55:12.022: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 29 21:55:12.022: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 29 21:55:12.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5333' Mar 29 21:55:12.118: INFO: stderr: "" Mar 29 21:55:12.118: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:55:12.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5333" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":218,"skipped":3729,"failed":0} S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:55:12.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-2bc0d80c-c65c-4fe7-8016-b3a256228114 STEP: Creating secret with name secret-projected-all-test-volume-df2ca6dd-e421-4521-8413-4b2ca452435d STEP: Creating a pod to test Check all projections for projected volume plugin Mar 29 21:55:12.230: INFO: Waiting up to 5m0s for pod "projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21" in namespace "projected-672" to be "success or failure" Mar 29 21:55:12.248: INFO: Pod "projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21": Phase="Pending", Reason="", readiness=false. Elapsed: 17.841975ms Mar 29 21:55:14.252: INFO: Pod "projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021775167s Mar 29 21:55:16.256: INFO: Pod "projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025703827s STEP: Saw pod success Mar 29 21:55:16.256: INFO: Pod "projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21" satisfied condition "success or failure" Mar 29 21:55:16.259: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21 container projected-all-volume-test: STEP: delete the pod Mar 29 21:55:16.296: INFO: Waiting for pod projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21 to disappear Mar 29 21:55:16.300: INFO: Pod projected-volume-a5b6c4f7-1d71-4907-bb90-7dc7c7cc0c21 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:55:16.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-672" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3730,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:55:16.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 29 21:55:16.385: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 29 21:55:17.108: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 29 21:55:19.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115717, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115717, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115717, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721115717, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 29 21:55:21.872: INFO: Waited 618.020084ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:55:22.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1913" for this suite. • [SLOW TEST:6.198 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":220,"skipped":3736,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:55:22.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:55:22.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2070" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":221,"skipped":3745,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:55:22.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-ecd67961-f7b3-4b30-9a04-740aaeb635c0 STEP: Creating a pod to test consume secrets Mar 29 21:55:22.871: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324" in namespace "projected-8530" to be "success or failure" Mar 29 21:55:22.935: INFO: Pod "pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324": Phase="Pending", Reason="", readiness=false. Elapsed: 63.584243ms Mar 29 21:55:24.939: INFO: Pod "pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.067585129s Mar 29 21:55:26.943: INFO: Pod "pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071593181s STEP: Saw pod success Mar 29 21:55:26.943: INFO: Pod "pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324" satisfied condition "success or failure" Mar 29 21:55:26.946: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324 container secret-volume-test: STEP: delete the pod Mar 29 21:55:27.009: INFO: Waiting for pod pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324 to disappear Mar 29 21:55:27.014: INFO: Pod pod-projected-secrets-612cb20b-9947-4e0c-bc4d-0292ebc9f324 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:55:27.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8530" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3745,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:55:27.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:55:27.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa" in namespace "projected-7765" to be "success or failure" Mar 29 21:55:27.104: INFO: Pod "downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234739ms Mar 29 21:55:29.108: INFO: Pod "downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112697s Mar 29 21:55:31.112: INFO: Pod "downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011342658s
STEP: Saw pod success
Mar 29 21:55:31.112: INFO: Pod "downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa" satisfied condition "success or failure"
Mar 29 21:55:31.118: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa container client-container:
STEP: delete the pod
Mar 29 21:55:31.174: INFO: Waiting for pod downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa to disappear
Mar 29 21:55:31.190: INFO: Pod downwardapi-volume-8ec1f0ef-0450-49f9-9f8b-6cc2eec814fa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:55:31.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7765" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3750,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:55:31.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 29 21:55:31.270: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799620 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 29 21:55:31.271: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799620 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 29 21:55:41.277: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799668 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 29 21:55:41.277: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799668 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 29 21:55:51.285: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799698 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 29 21:55:51.285: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799698 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 29 21:56:01.291: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799728 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 29 21:56:01.291: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-a bf5677e9-288e-447f-abe7-df67fe30b253 3799728 0 2020-03-29 21:55:31 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 29 21:56:11.299: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-b f872deaf-578e-4a9a-a61a-d9d1f23582ac 3799758 0 2020-03-29 21:56:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 29 21:56:11.299: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-b f872deaf-578e-4a9a-a61a-d9d1f23582ac 3799758 0 2020-03-29 21:56:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 29 21:56:21.306: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-b f872deaf-578e-4a9a-a61a-d9d1f23582ac 3799788 0 2020-03-29 21:56:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 29 21:56:21.306: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6031 /api/v1/namespaces/watch-6031/configmaps/e2e-watch-test-configmap-b f872deaf-578e-4a9a-a61a-d9d1f23582ac 3799788 0 2020-03-29 21:56:11 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
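The "Got : ..." events above are the substance of this spec: three watches are opened with different label selectors (label A, label B, and A-or-B), and every mutation of a matching ConfigMap is delivered to each watcher whose selector matches it, which is why every ADDED/MODIFIED/DELETED event is logged twice (once for the single-label watcher, once for the A-or-B watcher). Note also that resourceVersion climbs monotonically across the sequence (3799620 through 3799788); a client can use such a version to resume a broken watch. The following is a minimal client-go sketch of the same pattern, not the suite's own code: the namespace, label values, and kubeconfig path are simply taken from the log above, and the context-taking Watch signature assumes client-go v0.18 or newer.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "creating a watch on configmaps with label A". The A-or-B watch in the
	// test would instead use a set-based selector, e.g.
	// "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)".
	w, err := client.CoreV1().ConfigMaps("watch-6031").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Every create/update/delete of a matching ConfigMap arrives as one
	// ADDED/MODIFIED/DELETED event, which is what the "Got : ..." lines record.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}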
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 21:56:31.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6031" for this suite.
• [SLOW TEST:60.120 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":224,"skipped":3790,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 21:56:31.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 29 21:56:31.446: INFO: Creating deployment "webserver-deployment"
Mar 29 21:56:31.450: INFO: Waiting for observed generation 1
Mar 29 21:56:33.464: INFO: Waiting for all required pods to come up
Mar 29 21:56:33.468: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 29 21:56:41.482: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 29 21:56:41.485: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 29 21:56:41.489: INFO: Updating deployment webserver-deployment
Mar 29 21:56:41.489: INFO: Waiting for observed generation 2
Mar 29 21:56:43.500: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 29 21:56:43.502: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 29 21:56:43.505: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 29 21:56:43.513: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 29 21:56:43.513: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 29 21:56:43.515: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 29 21:56:43.519: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 29 21:56:43.519: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 29 21:56:43.523: INFO: Updating deployment webserver-deployment
Mar 29 21:56:43.523: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 29 21:56:43.551: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 29 21:56:43.685: INFO: Verifying that second rollout's
replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 29 21:56:43.767: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7979 /apis/apps/v1/namespaces/deployment-7979/deployments/webserver-deployment d3b6333c-f17f-44c7-9160-e1fae0f24ac4 3800036 3 2020-03-29 21:56:31 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003793e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-29 21:56:41 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-29 21:56:43 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 29 21:56:43.865: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7979 /apis/apps/v1/namespaces/deployment-7979/replicasets/webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 3800083 3 2020-03-29 21:56:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment d3b6333c-f17f-44c7-9160-e1fae0f24ac4 0xc0035a2337 0xc0035a2338}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035a23a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:56:43.866: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 29 21:56:43.866: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7979 /apis/apps/v1/namespaces/deployment-7979/replicasets/webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 3800085 3 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment d3b6333c-f17f-44c7-9160-e1fae0f24ac4 0xc0035a2277 0xc0035a2278}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035a22d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 29 21:56:43.970: INFO: Pod "webserver-deployment-595b5b9587-59xql" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-59xql webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-59xql bd52ddb8-5129-4c51-991f-25864a1da431 3800064 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a2837 0xc0035a2838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.971: INFO: Pod "webserver-deployment-595b5b9587-9p2sj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9p2sj webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-9p2sj 7c9907ce-c192-42ab-85d3-f30df83b7f31 3799935 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a2957 0xc0035a2958}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.57,StartTime:2020-03-29 21:56:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ee7189c2f00b03ae7e27600d00a6802b149cca9f77a76aef73ed0e0cc8c6a9f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.971: INFO: Pod "webserver-deployment-595b5b9587-cl22d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cl22d webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-cl22d c6eeff50-ee3f-44d9-9d97-9ef58efa268c 3800077 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a2ad7 0xc0035a2ad8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.971: INFO: Pod "webserver-deployment-595b5b9587-f6787" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f6787 webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-f6787 5f7fe0f1-dc63-4d82-8602-f015559938f3 3799909 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a2bf7 0xc0035a2bf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.93,StartTime:2020-03-29 21:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://910367da8c1f5cfb7007a0ad106b25966f6ef8a159421cbf1c03f35472bdc373,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.972: INFO: Pod "webserver-deployment-595b5b9587-g7zpp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g7zpp webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-g7zpp 6f39215e-5b23-4433-981d-9310048d911f 3800049 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a2d77 0xc0035a2d78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.972: INFO: Pod "webserver-deployment-595b5b9587-j5n5b" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j5n5b webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-j5n5b 3947109d-3fd4-4abf-81f8-9bb87cf5d982 3799963 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a2e97 0xc0035a2e98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.61,StartTime:2020-03-29 21:56:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2512ea46f036f9fd6fccc4c07866fdf2d13b5446638862c8fb1517cee75ab820,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.972: INFO: Pod "webserver-deployment-595b5b9587-jcrtc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jcrtc webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-jcrtc 6b15c633-e55b-45bd-86f2-f1598886271e 3799957 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3017 0xc0035a3018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.59,StartTime:2020-03-29 21:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7507a7faf5a1cb526caf1a9097bf15988ef786433cb63df2b4e58de5ae0ae67c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.973: INFO: Pod "webserver-deployment-595b5b9587-k6kmq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k6kmq webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-k6kmq 08a10d69-29ec-42a5-a541-9d2df78cc40a 3800091 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3197 0xc0035a3198}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-29 21:56:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.973: INFO: Pod "webserver-deployment-595b5b9587-l2g99" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l2g99 webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-l2g99 6afb9fc4-bc12-4b8c-afcb-df3e49dbd20d 3799921 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a32f7 0xc0035a32f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.94,StartTime:2020-03-29 21:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://211f1d0f2672f86a7dca952dba49b98710c473c7ce0f6ada530ce3bd0b7d0458,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.973: INFO: Pod "webserver-deployment-595b5b9587-lldzw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lldzw webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-lldzw e4073075-51a8-4d64-b751-d742e146fe25 3800076 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3477 0xc0035a3478}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.973: INFO: Pod "webserver-deployment-595b5b9587-mbzjb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mbzjb webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-mbzjb c65aea94-1229-4183-b653-a590e80153e3 3800062 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3597 0xc0035a3598}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.974: INFO: Pod "webserver-deployment-595b5b9587-nrmm6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nrmm6 webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-nrmm6 51e28ede-3021-40a3-ada2-6b7f6b3fcf77 3800075 0 
2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a36b7 0xc0035a36b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.974: INFO: Pod "webserver-deployment-595b5b9587-pml97" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pml97 webserver-deployment-595b5b9587- deployment-7979 
/api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-pml97 650aeaab-c186-4c58-a3a4-3b3ddf7dba14 3799943 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a37d7 0xc0035a37d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:40 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.96,StartTime:2020-03-29 21:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aea6e06062568b7e7c4748005c0baf71edea5a73c3468b2bcf9a60df91f505de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.974: INFO: Pod "webserver-deployment-595b5b9587-q47dz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q47dz webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-q47dz c2164f2c-227a-449a-8ae3-dac1df2b948f 3800078 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3957 0xc0035a3958}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-sche
duler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.974: INFO: Pod "webserver-deployment-595b5b9587-r5fx4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r5fx4 webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-r5fx4 72d56184-d21a-4a2e-a2d1-21dbc1f0bbe6 3800079 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3a77 0xc0035a3a78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePul
lSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.975: INFO: Pod "webserver-deployment-595b5b9587-r7tlv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r7tlv webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-r7tlv d320cd5d-a662-4eb7-bb01-b18f76b3c584 3799930 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3b97 0xc0035a3b98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Suppleme
ntalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.58,StartTime:2020-03-29 21:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://db8155bf629da53945197a40fa470a142051a254c70ef62b287312a72a960de8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.975: INFO: Pod "webserver-deployment-595b5b9587-rhfcj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rhfcj webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-rhfcj 5b42c590-3496-493b-8107-f25fed2e1ee5 3800054 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3d17 0xc0035a3d18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.975: INFO: Pod "webserver-deployment-595b5b9587-tq2tz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tq2tz webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-tq2tz 7d03735b-bcf1-4c8f-b895-31dcd16358d5 3799960 0 2020-03-29 21:56:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3e37 0xc0035a3e38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.60,StartTime:2020-03-29 21:56:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 21:56:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fd21fa5b7204f5425ea283215de8ef87e92c38f5024265948c14a1dd61d53dfa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.975: INFO: Pod "webserver-deployment-595b5b9587-wsbrx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wsbrx webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-wsbrx d413b7c3-10f4-4006-84f0-96ec0b9f3c4f 3800052 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc0035a3fb7 0xc0035a3fb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.975: INFO: Pod "webserver-deployment-595b5b9587-xl9c4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xl9c4 webserver-deployment-595b5b9587- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-595b5b9587-xl9c4 1f5c7de8-039a-4b10-9345-e20b63c7364b 3800066 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c8391dee-5186-4e1f-a386-7f54d399310c 0xc002e14127 0xc002e14128}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.975: INFO: Pod "webserver-deployment-c7997dcc8-2tj2s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2tj2s webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-2tj2s 801c924a-bdcf-45e7-9f8d-1a070a064ea5 3800073 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e14257 0xc002e14258}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node
.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.976: INFO: Pod "webserver-deployment-c7997dcc8-59v5x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-59v5x webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-59v5x 22c30420-0048-4695-8345-2bf16f66b429 3800022 0 2020-03-29 21:56:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e14397 0xc002e14398}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServi
ceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-29 21:56:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.976: INFO: Pod "webserver-deployment-c7997dcc8-98knv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-98knv webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-98knv 14a91f46-7ea8-4e5a-9a74-283bfc977645 3800020 0 2020-03-29 21:56:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e14537 0xc002e14538}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-29 21:56:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.976: INFO: Pod "webserver-deployment-c7997dcc8-b9xph" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b9xph webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-b9xph f1d51d12-7a4e-4ad5-b553-28e5d59f4683 3800023 0 2020-03-29 21:56:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e146c7 0xc002e146c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-29 21:56:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.976: INFO: Pod "webserver-deployment-c7997dcc8-f8tr5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f8tr5 webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-f8tr5 21425976-51b5-4b64-b9b1-c1ee77c0c517 3800010 0 2020-03-29 21:56:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e14847 0xc002e14848}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-29 21:56:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.976: INFO: Pod "webserver-deployment-c7997dcc8-gvppg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gvppg webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-gvppg e1fee410-fb70-4fb9-8fb7-79829e08001f 3800074 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e149e7 0xc002e149e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.977: INFO: Pod "webserver-deployment-c7997dcc8-lbsmt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lbsmt webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-lbsmt f8a2b7bd-751f-4c9b-b7c5-359f8072b7b7 3800057 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e14be7 0xc002e14be8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCla
ssName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.977: INFO: Pod "webserver-deployment-c7997dcc8-nx8jl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nx8jl webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-nx8jl c1d91696-e6ba-410c-a5c0-ca2848e87d16 3800081 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e15017 0xc002e15018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,SharePro
cessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.977: INFO: Pod "webserver-deployment-c7997dcc8-r6bjb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r6bjb webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-r6bjb aa86d5b0-2501-47b6-b733-e7cc0881333f 3800084 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e15d17 0xc002e15d18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]H
ostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.977: INFO: Pod "webserver-deployment-c7997dcc8-sjdq7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sjdq7 webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-sjdq7 2698a780-158f-4e3d-8d87-aa00a3a2f42f 3799998 0 2020-03-29 21:56:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002e15ff7 0xc002e15ff8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-29 21:56:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.977: INFO: Pod "webserver-deployment-c7997dcc8-w5dgg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w5dgg webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-w5dgg 0d05d9b7-b59a-4528-8692-7eed91e0e54b 3800080 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002da8377 0xc002da8378}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.977: INFO: Pod "webserver-deployment-c7997dcc8-wshfn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wshfn webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-wshfn e25a13b6-1b34-4839-b4e9-48c66e6d6742 3800090 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002da8637 0xc002da8638}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-29 21:56:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 29 21:56:43.978: INFO: Pod "webserver-deployment-c7997dcc8-xmvwc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xmvwc webserver-deployment-c7997dcc8- deployment-7979 /api/v1/namespaces/deployment-7979/pods/webserver-deployment-c7997dcc8-xmvwc 2a93051c-dfbb-4db3-ac74-75c42faeeeda 3800065 0 2020-03-29 21:56:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 92047265-2e7c-44c8-8bc5-80b3e3a00800 0xc002da89c7 0xc002da89c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 21:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:56:43.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7979" for this suite. • [SLOW TEST:12.871 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":225,"skipped":3807,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:56:44.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:56:44.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 29 21:56:44.765: INFO: stderr: "" Mar 29 21:56:44.765: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:31:51Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:56:44.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2051" for this suite. 
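For reference, the version check above boils down to shelling out to kubectl and asserting that both the client and the server stanzas appear on stdout. A minimal Go sketch of the same check (not the e2e suite's own code; assumes kubectl is on PATH and the default kubeconfig points at a reachable cluster):

    // versioncheck.go — mirrors the assertion made by the Kubectl version test.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Run `kubectl version` and capture everything it prints.
        out, err := exec.Command("kubectl", "version").CombinedOutput()
        if err != nil {
            fmt.Fprintf(os.Stderr, "kubectl version failed: %v\n%s", err, out)
            os.Exit(1)
        }
        // The test considers "all data printed" to mean both stanzas are present.
        for _, want := range []string{"Client Version", "Server Version"} {
            if !strings.Contains(string(out), want) {
                fmt.Fprintf(os.Stderr, "missing %q in output:\n%s", want, out)
                os.Exit(1)
            }
        }
        fmt.Println("kubectl version printed both client and server info")
    }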
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":226,"skipped":3813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:56:44.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-6744/secret-test-1a0787d3-9454-4669-b874-87a19b94c76e STEP: Creating a pod to test consume secrets Mar 29 21:56:44.835: INFO: Waiting up to 5m0s for pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c" in namespace "secrets-6744" to be "success or failure" Mar 29 21:56:44.855: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.152244ms Mar 29 21:56:46.871: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035962264s Mar 29 21:56:48.936: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101723571s Mar 29 21:56:51.195: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360206131s Mar 29 21:56:53.399: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56430561s Mar 29 21:56:55.895: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.060825201s Mar 29 21:56:58.128: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.293802149s STEP: Saw pod success Mar 29 21:56:58.128: INFO: Pod "pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c" satisfied condition "success or failure" Mar 29 21:56:58.194: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c container env-test: STEP: delete the pod Mar 29 21:56:58.302: INFO: Waiting for pod pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c to disappear Mar 29 21:56:58.307: INFO: Pod pod-configmaps-5853c92f-970b-4130-a2f8-9e3336e00d1c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:56:58.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6744" for this suite. 
• [SLOW TEST:13.568 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3842,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:56:58.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 29 21:57:10.640: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 29 21:57:10.709: INFO: Pod pod-with-poststart-exec-hook still exists Mar 29 21:57:12.710: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 29 21:57:12.714: INFO: Pod pod-with-poststart-exec-hook still exists Mar 29 21:57:14.710: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 29 21:57:14.714: INFO: Pod pod-with-poststart-exec-hook still exists Mar 29 21:57:16.710: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 29 21:57:16.714: INFO: Pod pod-with-poststart-exec-hook still exists Mar 29 21:57:18.710: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 29 21:57:18.713: INFO: Pod pod-with-poststart-exec-hook still exists Mar 29 21:57:20.710: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 29 21:57:20.713: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:57:20.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9155" for this suite. 
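For reference, the pod under test above wires a postStart exec hook into its container spec; the suite then checks the hook ran before deleting the pod and polling for it to disappear. A minimal sketch with the v1.17-era API types matching this run (the hook command and names are illustrative; newer client-go releases renamed corev1.Handler to corev1.LifecycleHandler):

    // poststart-pod.go — builds a pod spec with a postStart exec lifecycle hook.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                    Lifecycle: &corev1.Lifecycle{
                        // Runs inside the container immediately after it starts;
                        // the container is not Ready until the hook completes.
                        PostStart: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "echo started > /tmp/poststart"},
                            },
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }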
• [SLOW TEST:22.381 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3845,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:57:20.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-eae6cfbc-2362-43f2-89be-cc7d88bf1512 in namespace container-probe-5845 Mar 29 21:57:24.823: INFO: Started pod busybox-eae6cfbc-2362-43f2-89be-cc7d88bf1512 in namespace container-probe-5845 STEP: checking the pod's current state and verifying that restartCount is present Mar 29 21:57:24.826: INFO: Initial restart count of pod busybox-eae6cfbc-2362-43f2-89be-cc7d88bf1512 is 0 Mar 29 21:58:12.928: INFO: Restart count of pod container-probe-5845/busybox-eae6cfbc-2362-43f2-89be-cc7d88bf1512 is now 1 (48.101103877s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:58:12.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5845" for this suite. 
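For reference, the restart observed above (restartCount 0 -> 1 after ~48s) is driven by an exec liveness probe that starts failing once the container deletes /tmp/health. A minimal sketch of that pod shape, using the v1.17-era field layout from this run (Probe embeds Handler; later releases renamed the embedded type to ProbeHandler) and illustrative timings:

    // liveness-exec-pod.go — a container that deliberately fails its liveness probe.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox",
                    // Create the health file, remove it after 10s, then idle;
                    // the probe below succeeds while the file exists and fails after.
                    Command: []string{"sh", "-c",
                        "echo ok > /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 5, // give the container time to create the file
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }

With FailureThreshold 1 and PeriodSeconds 5, the kubelet restarts the container within roughly one probe period of the file disappearing, which is well inside the window the test waits for the restart count to tick over.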
• [SLOW TEST:52.271 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3855,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:58:12.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 21:58:13.067: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8493 I0329 21:58:13.080817 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8493, replica count: 1 I0329 21:58:14.131199 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:58:15.131419 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0329 21:58:16.131618 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 29 21:58:16.260: INFO: Created: latency-svc-d2g9z Mar 29 21:58:16.334: INFO: Got endpoints: latency-svc-d2g9z [103.027582ms] Mar 29 21:58:16.378: INFO: Created: latency-svc-vghfz Mar 29 21:58:16.387: INFO: Got endpoints: latency-svc-vghfz [52.155367ms] Mar 29 21:58:16.407: INFO: Created: latency-svc-kgvl7 Mar 29 21:58:16.417: INFO: Got endpoints: latency-svc-kgvl7 [82.01274ms] Mar 29 21:58:16.459: INFO: Created: latency-svc-f9hz7 Mar 29 21:58:16.482: INFO: Got endpoints: latency-svc-f9hz7 [147.691298ms] Mar 29 21:58:16.486: INFO: Created: latency-svc-6d8dh Mar 29 21:58:16.495: INFO: Got endpoints: latency-svc-6d8dh [160.204899ms] Mar 29 21:58:16.512: INFO: Created: latency-svc-g7489 Mar 29 21:58:16.525: INFO: Got endpoints: latency-svc-g7489 [190.407168ms] Mar 29 21:58:16.546: INFO: Created: latency-svc-qd4t2 Mar 29 21:58:16.602: INFO: Got endpoints: latency-svc-qd4t2 [267.58645ms] Mar 29 21:58:16.623: INFO: Created: latency-svc-sbl4h Mar 29 21:58:16.636: INFO: Got endpoints: latency-svc-sbl4h [301.610729ms] Mar 29 21:58:16.662: INFO: Created: latency-svc-bvzm5 Mar 29 21:58:16.679: INFO: Got endpoints: latency-svc-bvzm5 [344.07748ms] Mar 29 21:58:16.746: INFO: Created: latency-svc-hr89p Mar 29 21:58:16.750: INFO: Got endpoints: latency-svc-hr89p [415.327571ms] Mar 29 21:58:16.777: INFO: Created: latency-svc-lbk6m Mar 29 21:58:16.787: INFO: Got endpoints: latency-svc-lbk6m [452.317178ms] Mar 29 21:58:16.812: 
INFO: Created: latency-svc-x4b5v Mar 29 21:58:16.824: INFO: Got endpoints: latency-svc-x4b5v [488.718065ms] Mar 29 21:58:16.845: INFO: Created: latency-svc-gchp8 Mar 29 21:58:16.908: INFO: Got endpoints: latency-svc-gchp8 [572.856121ms] Mar 29 21:58:16.913: INFO: Created: latency-svc-5827j Mar 29 21:58:16.919: INFO: Got endpoints: latency-svc-5827j [584.41137ms] Mar 29 21:58:16.938: INFO: Created: latency-svc-k7v2k Mar 29 21:58:16.956: INFO: Got endpoints: latency-svc-k7v2k [621.083143ms] Mar 29 21:58:16.981: INFO: Created: latency-svc-n7mnx Mar 29 21:58:16.992: INFO: Got endpoints: latency-svc-n7mnx [657.258409ms] Mar 29 21:58:17.064: INFO: Created: latency-svc-lnpdp Mar 29 21:58:17.070: INFO: Got endpoints: latency-svc-lnpdp [683.576221ms] Mar 29 21:58:17.097: INFO: Created: latency-svc-5jdnb Mar 29 21:58:17.124: INFO: Got endpoints: latency-svc-5jdnb [707.769003ms] Mar 29 21:58:17.155: INFO: Created: latency-svc-zg8kd Mar 29 21:58:17.207: INFO: Got endpoints: latency-svc-zg8kd [724.435125ms] Mar 29 21:58:17.210: INFO: Created: latency-svc-ghbgt Mar 29 21:58:17.215: INFO: Got endpoints: latency-svc-ghbgt [720.281729ms] Mar 29 21:58:17.238: INFO: Created: latency-svc-vgg8m Mar 29 21:58:17.252: INFO: Got endpoints: latency-svc-vgg8m [726.347268ms] Mar 29 21:58:17.275: INFO: Created: latency-svc-vrb4n Mar 29 21:58:17.287: INFO: Got endpoints: latency-svc-vrb4n [685.282786ms] Mar 29 21:58:17.339: INFO: Created: latency-svc-4ckz5 Mar 29 21:58:17.367: INFO: Got endpoints: latency-svc-4ckz5 [730.431319ms] Mar 29 21:58:17.370: INFO: Created: latency-svc-nkkpq Mar 29 21:58:17.384: INFO: Got endpoints: latency-svc-nkkpq [705.650391ms] Mar 29 21:58:17.409: INFO: Created: latency-svc-d76jh Mar 29 21:58:17.427: INFO: Got endpoints: latency-svc-d76jh [676.256002ms] Mar 29 21:58:17.500: INFO: Created: latency-svc-vd2sk Mar 29 21:58:17.526: INFO: Created: latency-svc-xf24m Mar 29 21:58:17.526: INFO: Got endpoints: latency-svc-vd2sk [739.292418ms] Mar 29 21:58:17.535: INFO: Got endpoints: latency-svc-xf24m [711.333078ms] Mar 29 21:58:17.568: INFO: Created: latency-svc-gjz2h Mar 29 21:58:17.584: INFO: Got endpoints: latency-svc-gjz2h [675.768772ms] Mar 29 21:58:17.632: INFO: Created: latency-svc-z8hbp Mar 29 21:58:17.635: INFO: Got endpoints: latency-svc-z8hbp [716.049418ms] Mar 29 21:58:17.667: INFO: Created: latency-svc-jwgmf Mar 29 21:58:17.680: INFO: Got endpoints: latency-svc-jwgmf [723.530616ms] Mar 29 21:58:17.700: INFO: Created: latency-svc-crbsj Mar 29 21:58:17.716: INFO: Got endpoints: latency-svc-crbsj [723.830261ms] Mar 29 21:58:17.764: INFO: Created: latency-svc-qzp5k Mar 29 21:58:17.767: INFO: Got endpoints: latency-svc-qzp5k [696.974455ms] Mar 29 21:58:17.796: INFO: Created: latency-svc-m7rhp Mar 29 21:58:17.807: INFO: Got endpoints: latency-svc-m7rhp [682.654679ms] Mar 29 21:58:17.829: INFO: Created: latency-svc-d5w2s Mar 29 21:58:17.908: INFO: Got endpoints: latency-svc-d5w2s [701.020488ms] Mar 29 21:58:17.934: INFO: Created: latency-svc-ww74h Mar 29 21:58:17.951: INFO: Got endpoints: latency-svc-ww74h [736.137282ms] Mar 29 21:58:17.976: INFO: Created: latency-svc-qs58w Mar 29 21:58:17.988: INFO: Got endpoints: latency-svc-qs58w [735.903141ms] Mar 29 21:58:18.052: INFO: Created: latency-svc-cp4cf Mar 29 21:58:18.075: INFO: Got endpoints: latency-svc-cp4cf [787.6524ms] Mar 29 21:58:18.076: INFO: Created: latency-svc-brc5z Mar 29 21:58:18.102: INFO: Got endpoints: latency-svc-brc5z [735.312047ms] Mar 29 21:58:18.124: INFO: Created: latency-svc-b5nsb Mar 29 21:58:18.142: INFO: Got endpoints: 
latency-svc-b5nsb [757.756614ms] Mar 29 21:58:18.202: INFO: Created: latency-svc-7m7qs Mar 29 21:58:18.236: INFO: Got endpoints: latency-svc-7m7qs [809.136268ms] Mar 29 21:58:18.236: INFO: Created: latency-svc-7wfgn Mar 29 21:58:18.265: INFO: Got endpoints: latency-svc-7wfgn [738.613089ms] Mar 29 21:58:18.357: INFO: Created: latency-svc-jrvql Mar 29 21:58:18.366: INFO: Got endpoints: latency-svc-jrvql [831.498656ms] Mar 29 21:58:18.388: INFO: Created: latency-svc-2svwp Mar 29 21:58:18.397: INFO: Got endpoints: latency-svc-2svwp [813.280333ms] Mar 29 21:58:18.427: INFO: Created: latency-svc-dw5hn Mar 29 21:58:18.445: INFO: Got endpoints: latency-svc-dw5hn [809.533177ms] Mar 29 21:58:18.509: INFO: Created: latency-svc-jnp6t Mar 29 21:58:18.532: INFO: Got endpoints: latency-svc-jnp6t [851.980928ms] Mar 29 21:58:18.563: INFO: Created: latency-svc-425z6 Mar 29 21:58:18.572: INFO: Got endpoints: latency-svc-425z6 [855.374359ms] Mar 29 21:58:18.599: INFO: Created: latency-svc-d5vl4 Mar 29 21:58:18.644: INFO: Got endpoints: latency-svc-d5vl4 [876.916685ms] Mar 29 21:58:18.683: INFO: Created: latency-svc-rql26 Mar 29 21:58:18.692: INFO: Got endpoints: latency-svc-rql26 [884.588718ms] Mar 29 21:58:18.715: INFO: Created: latency-svc-qs7zw Mar 29 21:58:18.728: INFO: Got endpoints: latency-svc-qs7zw [820.005176ms] Mar 29 21:58:18.781: INFO: Created: latency-svc-2vdmx Mar 29 21:58:18.796: INFO: Got endpoints: latency-svc-2vdmx [844.546922ms] Mar 29 21:58:18.832: INFO: Created: latency-svc-kbwfr Mar 29 21:58:18.843: INFO: Got endpoints: latency-svc-kbwfr [855.40422ms] Mar 29 21:58:18.865: INFO: Created: latency-svc-n82sx Mar 29 21:58:18.879: INFO: Got endpoints: latency-svc-n82sx [804.010202ms] Mar 29 21:58:18.926: INFO: Created: latency-svc-fmczr Mar 29 21:58:18.930: INFO: Got endpoints: latency-svc-fmczr [827.455866ms] Mar 29 21:58:18.964: INFO: Created: latency-svc-8nmxf Mar 29 21:58:18.976: INFO: Got endpoints: latency-svc-8nmxf [833.311631ms] Mar 29 21:58:18.994: INFO: Created: latency-svc-prdn5 Mar 29 21:58:19.006: INFO: Got endpoints: latency-svc-prdn5 [770.226583ms] Mar 29 21:58:19.064: INFO: Created: latency-svc-5bd6t Mar 29 21:58:19.067: INFO: Got endpoints: latency-svc-5bd6t [801.455105ms] Mar 29 21:58:19.129: INFO: Created: latency-svc-c47cj Mar 29 21:58:19.159: INFO: Got endpoints: latency-svc-c47cj [792.379757ms] Mar 29 21:58:19.214: INFO: Created: latency-svc-7nbq5 Mar 29 21:58:19.217: INFO: Got endpoints: latency-svc-7nbq5 [819.840918ms] Mar 29 21:58:19.246: INFO: Created: latency-svc-vgr62 Mar 29 21:58:19.259: INFO: Got endpoints: latency-svc-vgr62 [813.95722ms] Mar 29 21:58:19.285: INFO: Created: latency-svc-z6cnk Mar 29 21:58:19.301: INFO: Got endpoints: latency-svc-z6cnk [769.327092ms] Mar 29 21:58:19.357: INFO: Created: latency-svc-wx7l8 Mar 29 21:58:19.384: INFO: Created: latency-svc-rc87b Mar 29 21:58:19.384: INFO: Got endpoints: latency-svc-wx7l8 [812.208581ms] Mar 29 21:58:19.397: INFO: Got endpoints: latency-svc-rc87b [753.148988ms] Mar 29 21:58:19.426: INFO: Created: latency-svc-zklc9 Mar 29 21:58:19.440: INFO: Got endpoints: latency-svc-zklc9 [748.047059ms] Mar 29 21:58:19.519: INFO: Created: latency-svc-mwnhz Mar 29 21:58:19.523: INFO: Got endpoints: latency-svc-mwnhz [795.29913ms] Mar 29 21:58:19.559: INFO: Created: latency-svc-rf42k Mar 29 21:58:19.572: INFO: Got endpoints: latency-svc-rf42k [776.350306ms] Mar 29 21:58:19.597: INFO: Created: latency-svc-dwhxt Mar 29 21:58:19.686: INFO: Got endpoints: latency-svc-dwhxt [843.405989ms] Mar 29 21:58:19.689: INFO: Created: 
latency-svc-v4m4b Mar 29 21:58:19.692: INFO: Got endpoints: latency-svc-v4m4b [812.94924ms] Mar 29 21:58:19.717: INFO: Created: latency-svc-wv5f4 Mar 29 21:58:19.729: INFO: Got endpoints: latency-svc-wv5f4 [799.592467ms] Mar 29 21:58:19.747: INFO: Created: latency-svc-v8gs6 Mar 29 21:58:19.760: INFO: Got endpoints: latency-svc-v8gs6 [784.021453ms] Mar 29 21:58:19.777: INFO: Created: latency-svc-czjgb Mar 29 21:58:19.812: INFO: Got endpoints: latency-svc-czjgb [805.539049ms] Mar 29 21:58:19.822: INFO: Created: latency-svc-kqbcv Mar 29 21:58:19.838: INFO: Got endpoints: latency-svc-kqbcv [771.609503ms] Mar 29 21:58:19.864: INFO: Created: latency-svc-mplqd Mar 29 21:58:19.875: INFO: Got endpoints: latency-svc-mplqd [715.733691ms] Mar 29 21:58:19.897: INFO: Created: latency-svc-6fzz9 Mar 29 21:58:19.911: INFO: Got endpoints: latency-svc-6fzz9 [693.788469ms] Mar 29 21:58:19.944: INFO: Created: latency-svc-8nj5h Mar 29 21:58:19.953: INFO: Got endpoints: latency-svc-8nj5h [694.107555ms] Mar 29 21:58:19.975: INFO: Created: latency-svc-6z4wd Mar 29 21:58:19.989: INFO: Got endpoints: latency-svc-6z4wd [687.998049ms] Mar 29 21:58:20.014: INFO: Created: latency-svc-mfw6j Mar 29 21:58:20.026: INFO: Got endpoints: latency-svc-mfw6j [641.853482ms] Mar 29 21:58:20.082: INFO: Created: latency-svc-wzxs9 Mar 29 21:58:20.085: INFO: Got endpoints: latency-svc-wzxs9 [687.034868ms] Mar 29 21:58:20.113: INFO: Created: latency-svc-kkwfp Mar 29 21:58:20.128: INFO: Got endpoints: latency-svc-kkwfp [687.903019ms] Mar 29 21:58:20.219: INFO: Created: latency-svc-5ttbd Mar 29 21:58:20.223: INFO: Got endpoints: latency-svc-5ttbd [699.469374ms] Mar 29 21:58:20.266: INFO: Created: latency-svc-snkgr Mar 29 21:58:20.279: INFO: Got endpoints: latency-svc-snkgr [706.205045ms] Mar 29 21:58:20.302: INFO: Created: latency-svc-zmr6p Mar 29 21:58:20.315: INFO: Got endpoints: latency-svc-zmr6p [628.011238ms] Mar 29 21:58:20.365: INFO: Created: latency-svc-2tcqd Mar 29 21:58:20.374: INFO: Got endpoints: latency-svc-2tcqd [682.162684ms] Mar 29 21:58:20.404: INFO: Created: latency-svc-t78sg Mar 29 21:58:20.417: INFO: Got endpoints: latency-svc-t78sg [687.649963ms] Mar 29 21:58:20.440: INFO: Created: latency-svc-l5wv5 Mar 29 21:58:20.453: INFO: Got endpoints: latency-svc-l5wv5 [693.734205ms] Mar 29 21:58:20.513: INFO: Created: latency-svc-vr5k7 Mar 29 21:58:20.516: INFO: Got endpoints: latency-svc-vr5k7 [704.585143ms] Mar 29 21:58:20.545: INFO: Created: latency-svc-nz6t4 Mar 29 21:58:20.563: INFO: Got endpoints: latency-svc-nz6t4 [724.19966ms] Mar 29 21:58:20.582: INFO: Created: latency-svc-m9zl8 Mar 29 21:58:20.599: INFO: Got endpoints: latency-svc-m9zl8 [724.159118ms] Mar 29 21:58:20.644: INFO: Created: latency-svc-hrz67 Mar 29 21:58:20.648: INFO: Got endpoints: latency-svc-hrz67 [736.874138ms] Mar 29 21:58:20.680: INFO: Created: latency-svc-4mxsr Mar 29 21:58:20.688: INFO: Got endpoints: latency-svc-4mxsr [735.171071ms] Mar 29 21:58:20.743: INFO: Created: latency-svc-zwr65 Mar 29 21:58:20.799: INFO: Got endpoints: latency-svc-zwr65 [810.182319ms] Mar 29 21:58:20.801: INFO: Created: latency-svc-qh2qd Mar 29 21:58:20.809: INFO: Got endpoints: latency-svc-qh2qd [783.18735ms] Mar 29 21:58:20.831: INFO: Created: latency-svc-zm9lh Mar 29 21:58:20.845: INFO: Got endpoints: latency-svc-zm9lh [760.660831ms] Mar 29 21:58:20.884: INFO: Created: latency-svc-44zq2 Mar 29 21:58:20.894: INFO: Got endpoints: latency-svc-44zq2 [766.101582ms] Mar 29 21:58:20.938: INFO: Created: latency-svc-8qfp8 Mar 29 21:58:20.972: INFO: Created: latency-svc-45r4z 
Mar 29 21:58:20.972: INFO: Got endpoints: latency-svc-8qfp8 [749.179153ms] Mar 29 21:58:20.984: INFO: Got endpoints: latency-svc-45r4z [705.653024ms] Mar 29 21:58:21.004: INFO: Created: latency-svc-t6v9h Mar 29 21:58:21.021: INFO: Got endpoints: latency-svc-t6v9h [706.202103ms] Mar 29 21:58:21.071: INFO: Created: latency-svc-dn728 Mar 29 21:58:21.081: INFO: Got endpoints: latency-svc-dn728 [705.980385ms] Mar 29 21:58:21.121: INFO: Created: latency-svc-7ntff Mar 29 21:58:21.141: INFO: Got endpoints: latency-svc-7ntff [723.812837ms] Mar 29 21:58:21.201: INFO: Created: latency-svc-7lk22 Mar 29 21:58:21.204: INFO: Got endpoints: latency-svc-7lk22 [750.692521ms] Mar 29 21:58:21.232: INFO: Created: latency-svc-xmw4w Mar 29 21:58:21.243: INFO: Got endpoints: latency-svc-xmw4w [727.020096ms] Mar 29 21:58:21.268: INFO: Created: latency-svc-ljhn6 Mar 29 21:58:21.279: INFO: Got endpoints: latency-svc-ljhn6 [716.654663ms] Mar 29 21:58:21.347: INFO: Created: latency-svc-8m5gh Mar 29 21:58:21.351: INFO: Got endpoints: latency-svc-8m5gh [751.700899ms] Mar 29 21:58:21.385: INFO: Created: latency-svc-cp5xv Mar 29 21:58:21.398: INFO: Got endpoints: latency-svc-cp5xv [750.439948ms] Mar 29 21:58:21.417: INFO: Created: latency-svc-jhzrt Mar 29 21:58:21.434: INFO: Got endpoints: latency-svc-jhzrt [745.816284ms] Mar 29 21:58:21.483: INFO: Created: latency-svc-gx9fc Mar 29 21:58:21.486: INFO: Got endpoints: latency-svc-gx9fc [686.303215ms] Mar 29 21:58:21.520: INFO: Created: latency-svc-qtjbv Mar 29 21:58:21.537: INFO: Got endpoints: latency-svc-qtjbv [727.746296ms] Mar 29 21:58:21.560: INFO: Created: latency-svc-wqml6 Mar 29 21:58:21.620: INFO: Got endpoints: latency-svc-wqml6 [774.645782ms] Mar 29 21:58:21.634: INFO: Created: latency-svc-ptcjb Mar 29 21:58:21.651: INFO: Got endpoints: latency-svc-ptcjb [757.200003ms] Mar 29 21:58:21.700: INFO: Created: latency-svc-mp76r Mar 29 21:58:21.718: INFO: Got endpoints: latency-svc-mp76r [745.483459ms] Mar 29 21:58:21.794: INFO: Created: latency-svc-g942z Mar 29 21:58:21.796: INFO: Got endpoints: latency-svc-g942z [811.943032ms] Mar 29 21:58:21.823: INFO: Created: latency-svc-hhbgb Mar 29 21:58:21.832: INFO: Got endpoints: latency-svc-hhbgb [811.186552ms] Mar 29 21:58:21.853: INFO: Created: latency-svc-mxgwn Mar 29 21:58:21.862: INFO: Got endpoints: latency-svc-mxgwn [781.350428ms] Mar 29 21:58:21.885: INFO: Created: latency-svc-qjh5s Mar 29 21:58:21.949: INFO: Got endpoints: latency-svc-qjh5s [808.318873ms] Mar 29 21:58:21.952: INFO: Created: latency-svc-rtznl Mar 29 21:58:21.958: INFO: Got endpoints: latency-svc-rtznl [754.157924ms] Mar 29 21:58:21.980: INFO: Created: latency-svc-fzr96 Mar 29 21:58:21.995: INFO: Got endpoints: latency-svc-fzr96 [751.789925ms] Mar 29 21:58:22.039: INFO: Created: latency-svc-rllsn Mar 29 21:58:22.117: INFO: Got endpoints: latency-svc-rllsn [837.943253ms] Mar 29 21:58:22.119: INFO: Created: latency-svc-qqlck Mar 29 21:58:22.127: INFO: Got endpoints: latency-svc-qqlck [776.659863ms] Mar 29 21:58:22.150: INFO: Created: latency-svc-c7wtb Mar 29 21:58:22.164: INFO: Got endpoints: latency-svc-c7wtb [765.946152ms] Mar 29 21:58:22.183: INFO: Created: latency-svc-g8ksw Mar 29 21:58:22.200: INFO: Got endpoints: latency-svc-g8ksw [765.882001ms] Mar 29 21:58:22.273: INFO: Created: latency-svc-ndx2g Mar 29 21:58:22.276: INFO: Got endpoints: latency-svc-ndx2g [789.731368ms] Mar 29 21:58:22.324: INFO: Created: latency-svc-z8cfw Mar 29 21:58:22.339: INFO: Got endpoints: latency-svc-z8cfw [801.942126ms] Mar 29 21:58:22.371: INFO: Created: 
latency-svc-vr6nj Mar 29 21:58:22.405: INFO: Got endpoints: latency-svc-vr6nj [785.280181ms] Mar 29 21:58:22.422: INFO: Created: latency-svc-7rwwz Mar 29 21:58:22.441: INFO: Got endpoints: latency-svc-7rwwz [790.13301ms] Mar 29 21:58:22.465: INFO: Created: latency-svc-p86gc Mar 29 21:58:22.477: INFO: Got endpoints: latency-svc-p86gc [759.519313ms] Mar 29 21:58:22.567: INFO: Created: latency-svc-8v7pm Mar 29 21:58:22.570: INFO: Got endpoints: latency-svc-8v7pm [773.479548ms] Mar 29 21:58:22.599: INFO: Created: latency-svc-zz2c4 Mar 29 21:58:22.616: INFO: Got endpoints: latency-svc-zz2c4 [783.590751ms] Mar 29 21:58:22.632: INFO: Created: latency-svc-dtnf4 Mar 29 21:58:22.646: INFO: Got endpoints: latency-svc-dtnf4 [784.168895ms] Mar 29 21:58:22.710: INFO: Created: latency-svc-klhdc Mar 29 21:58:22.713: INFO: Got endpoints: latency-svc-klhdc [763.714335ms] Mar 29 21:58:22.738: INFO: Created: latency-svc-ts6w5 Mar 29 21:58:22.767: INFO: Got endpoints: latency-svc-ts6w5 [808.071201ms] Mar 29 21:58:22.792: INFO: Created: latency-svc-c487t Mar 29 21:58:22.848: INFO: Got endpoints: latency-svc-c487t [852.921132ms] Mar 29 21:58:22.854: INFO: Created: latency-svc-slkht Mar 29 21:58:22.888: INFO: Got endpoints: latency-svc-slkht [770.367941ms] Mar 29 21:58:22.915: INFO: Created: latency-svc-796wl Mar 29 21:58:22.941: INFO: Got endpoints: latency-svc-796wl [814.082551ms] Mar 29 21:58:23.010: INFO: Created: latency-svc-gx8gh Mar 29 21:58:23.012: INFO: Got endpoints: latency-svc-gx8gh [847.950236ms] Mar 29 21:58:23.071: INFO: Created: latency-svc-55mk8 Mar 29 21:58:23.183: INFO: Got endpoints: latency-svc-55mk8 [983.287172ms] Mar 29 21:58:23.186: INFO: Created: latency-svc-x5br9 Mar 29 21:58:23.204: INFO: Got endpoints: latency-svc-x5br9 [927.901869ms] Mar 29 21:58:23.227: INFO: Created: latency-svc-ztl6d Mar 29 21:58:23.240: INFO: Got endpoints: latency-svc-ztl6d [901.365236ms] Mar 29 21:58:23.257: INFO: Created: latency-svc-8p4b4 Mar 29 21:58:23.270: INFO: Got endpoints: latency-svc-8p4b4 [864.833551ms] Mar 29 21:58:23.333: INFO: Created: latency-svc-f957w Mar 29 21:58:23.335: INFO: Got endpoints: latency-svc-f957w [893.767937ms] Mar 29 21:58:23.368: INFO: Created: latency-svc-cc7ml Mar 29 21:58:23.379: INFO: Got endpoints: latency-svc-cc7ml [901.693159ms] Mar 29 21:58:23.411: INFO: Created: latency-svc-xvcjv Mar 29 21:58:23.425: INFO: Got endpoints: latency-svc-xvcjv [854.73698ms] Mar 29 21:58:23.489: INFO: Created: latency-svc-4674m Mar 29 21:58:23.493: INFO: Got endpoints: latency-svc-4674m [877.790287ms] Mar 29 21:58:23.512: INFO: Created: latency-svc-bf9bz Mar 29 21:58:23.524: INFO: Got endpoints: latency-svc-bf9bz [877.334115ms] Mar 29 21:58:23.542: INFO: Created: latency-svc-29tch Mar 29 21:58:23.566: INFO: Got endpoints: latency-svc-29tch [853.185378ms] Mar 29 21:58:23.626: INFO: Created: latency-svc-grpdq Mar 29 21:58:23.633: INFO: Got endpoints: latency-svc-grpdq [866.216606ms] Mar 29 21:58:23.671: INFO: Created: latency-svc-wkcrz Mar 29 21:58:23.692: INFO: Got endpoints: latency-svc-wkcrz [844.116065ms] Mar 29 21:58:23.710: INFO: Created: latency-svc-xlb65 Mar 29 21:58:23.758: INFO: Got endpoints: latency-svc-xlb65 [869.993288ms] Mar 29 21:58:23.769: INFO: Created: latency-svc-vqzt8 Mar 29 21:58:23.783: INFO: Got endpoints: latency-svc-vqzt8 [841.395158ms] Mar 29 21:58:23.802: INFO: Created: latency-svc-2mm4s Mar 29 21:58:23.819: INFO: Got endpoints: latency-svc-2mm4s [807.21148ms] Mar 29 21:58:23.839: INFO: Created: latency-svc-g4pf6 Mar 29 21:58:23.850: INFO: Got endpoints: 
latency-svc-g4pf6 [666.13398ms] Mar 29 21:58:23.902: INFO: Created: latency-svc-rxl8d Mar 29 21:58:23.920: INFO: Created: latency-svc-sk4zm Mar 29 21:58:23.921: INFO: Got endpoints: latency-svc-rxl8d [717.4952ms] Mar 29 21:58:23.934: INFO: Got endpoints: latency-svc-sk4zm [694.073248ms] Mar 29 21:58:23.968: INFO: Created: latency-svc-7vxwq Mar 29 21:58:23.982: INFO: Got endpoints: latency-svc-7vxwq [712.201936ms] Mar 29 21:58:24.064: INFO: Created: latency-svc-vkg9x Mar 29 21:58:24.066: INFO: Got endpoints: latency-svc-vkg9x [730.876253ms] Mar 29 21:58:24.100: INFO: Created: latency-svc-hgdtv Mar 29 21:58:24.115: INFO: Got endpoints: latency-svc-hgdtv [735.905163ms] Mar 29 21:58:24.136: INFO: Created: latency-svc-pc6qt Mar 29 21:58:24.151: INFO: Got endpoints: latency-svc-pc6qt [726.795471ms] Mar 29 21:58:24.237: INFO: Created: latency-svc-brj2h Mar 29 21:58:24.277: INFO: Got endpoints: latency-svc-brj2h [783.854218ms] Mar 29 21:58:24.278: INFO: Created: latency-svc-79cpx Mar 29 21:58:24.319: INFO: Got endpoints: latency-svc-79cpx [795.5947ms] Mar 29 21:58:24.399: INFO: Created: latency-svc-566mk Mar 29 21:58:24.402: INFO: Got endpoints: latency-svc-566mk [835.724175ms] Mar 29 21:58:24.454: INFO: Created: latency-svc-tf96q Mar 29 21:58:24.470: INFO: Got endpoints: latency-svc-tf96q [837.518425ms] Mar 29 21:58:24.487: INFO: Created: latency-svc-5nmh6 Mar 29 21:58:24.536: INFO: Got endpoints: latency-svc-5nmh6 [843.942505ms] Mar 29 21:58:24.562: INFO: Created: latency-svc-n798f Mar 29 21:58:24.573: INFO: Got endpoints: latency-svc-n798f [814.810744ms] Mar 29 21:58:24.598: INFO: Created: latency-svc-hhlgj Mar 29 21:58:24.615: INFO: Got endpoints: latency-svc-hhlgj [831.751324ms] Mar 29 21:58:24.630: INFO: Created: latency-svc-zq74f Mar 29 21:58:24.663: INFO: Got endpoints: latency-svc-zq74f [843.231669ms] Mar 29 21:58:24.673: INFO: Created: latency-svc-wmlbc Mar 29 21:58:24.688: INFO: Got endpoints: latency-svc-wmlbc [838.037543ms] Mar 29 21:58:24.715: INFO: Created: latency-svc-fjpnn Mar 29 21:58:24.730: INFO: Got endpoints: latency-svc-fjpnn [809.207187ms] Mar 29 21:58:24.747: INFO: Created: latency-svc-84pb8 Mar 29 21:58:24.760: INFO: Got endpoints: latency-svc-84pb8 [825.735929ms] Mar 29 21:58:24.806: INFO: Created: latency-svc-v5rrr Mar 29 21:58:24.828: INFO: Got endpoints: latency-svc-v5rrr [845.996803ms] Mar 29 21:58:24.859: INFO: Created: latency-svc-z99jg Mar 29 21:58:24.869: INFO: Got endpoints: latency-svc-z99jg [802.472414ms] Mar 29 21:58:24.889: INFO: Created: latency-svc-2z9r7 Mar 29 21:58:24.899: INFO: Got endpoints: latency-svc-2z9r7 [783.886721ms] Mar 29 21:58:24.944: INFO: Created: latency-svc-dgdhw Mar 29 21:58:24.947: INFO: Got endpoints: latency-svc-dgdhw [795.959552ms] Mar 29 21:58:24.970: INFO: Created: latency-svc-72qj9 Mar 29 21:58:24.984: INFO: Got endpoints: latency-svc-72qj9 [706.227837ms] Mar 29 21:58:24.999: INFO: Created: latency-svc-6mtqw Mar 29 21:58:25.038: INFO: Got endpoints: latency-svc-6mtqw [718.993269ms] Mar 29 21:58:25.099: INFO: Created: latency-svc-b5frc Mar 29 21:58:25.104: INFO: Got endpoints: latency-svc-b5frc [701.978018ms] Mar 29 21:58:25.126: INFO: Created: latency-svc-h7ktv Mar 29 21:58:25.141: INFO: Got endpoints: latency-svc-h7ktv [670.551668ms] Mar 29 21:58:25.192: INFO: Created: latency-svc-6w848 Mar 29 21:58:25.231: INFO: Got endpoints: latency-svc-6w848 [694.446939ms] Mar 29 21:58:25.248: INFO: Created: latency-svc-9v4dh Mar 29 21:58:25.261: INFO: Got endpoints: latency-svc-9v4dh [688.458932ms] Mar 29 21:58:25.278: INFO: Created: 
latency-svc-sjn44 Mar 29 21:58:25.292: INFO: Got endpoints: latency-svc-sjn44 [676.716148ms] Mar 29 21:58:25.321: INFO: Created: latency-svc-5g874 Mar 29 21:58:25.356: INFO: Got endpoints: latency-svc-5g874 [693.620788ms] Mar 29 21:58:25.372: INFO: Created: latency-svc-9mn74 Mar 29 21:58:25.402: INFO: Got endpoints: latency-svc-9mn74 [713.757799ms] Mar 29 21:58:25.432: INFO: Created: latency-svc-gxkmk Mar 29 21:58:25.448: INFO: Got endpoints: latency-svc-gxkmk [717.576767ms] Mar 29 21:58:25.498: INFO: Created: latency-svc-gcvv4 Mar 29 21:58:25.502: INFO: Got endpoints: latency-svc-gcvv4 [742.165154ms] Mar 29 21:58:25.519: INFO: Created: latency-svc-z9l4n Mar 29 21:58:25.533: INFO: Got endpoints: latency-svc-z9l4n [704.045876ms] Mar 29 21:58:25.548: INFO: Created: latency-svc-f6xdc Mar 29 21:58:25.557: INFO: Got endpoints: latency-svc-f6xdc [688.386816ms] Mar 29 21:58:25.575: INFO: Created: latency-svc-wwc7j Mar 29 21:58:25.588: INFO: Got endpoints: latency-svc-wwc7j [689.242223ms] Mar 29 21:58:25.626: INFO: Created: latency-svc-gq45d Mar 29 21:58:25.648: INFO: Got endpoints: latency-svc-gq45d [700.716356ms] Mar 29 21:58:25.675: INFO: Created: latency-svc-7cxp8 Mar 29 21:58:25.690: INFO: Got endpoints: latency-svc-7cxp8 [706.452967ms] Mar 29 21:58:25.710: INFO: Created: latency-svc-h6vdq Mar 29 21:58:25.720: INFO: Got endpoints: latency-svc-h6vdq [682.197109ms] Mar 29 21:58:25.770: INFO: Created: latency-svc-w8cxr Mar 29 21:58:25.775: INFO: Got endpoints: latency-svc-w8cxr [670.364534ms] Mar 29 21:58:25.798: INFO: Created: latency-svc-m9pzj Mar 29 21:58:25.810: INFO: Got endpoints: latency-svc-m9pzj [669.454411ms] Mar 29 21:58:25.828: INFO: Created: latency-svc-n4d5n Mar 29 21:58:25.841: INFO: Got endpoints: latency-svc-n4d5n [610.256118ms] Mar 29 21:58:25.859: INFO: Created: latency-svc-2bq5p Mar 29 21:58:25.926: INFO: Got endpoints: latency-svc-2bq5p [664.893463ms] Mar 29 21:58:25.929: INFO: Created: latency-svc-rtwgb Mar 29 21:58:25.944: INFO: Got endpoints: latency-svc-rtwgb [652.937835ms] Mar 29 21:58:25.972: INFO: Created: latency-svc-gfgwc Mar 29 21:58:25.986: INFO: Got endpoints: latency-svc-gfgwc [629.271246ms] Mar 29 21:58:26.002: INFO: Created: latency-svc-6zfkr Mar 29 21:58:26.016: INFO: Got endpoints: latency-svc-6zfkr [614.313713ms] Mar 29 21:58:26.057: INFO: Created: latency-svc-4cftp Mar 29 21:58:26.060: INFO: Got endpoints: latency-svc-4cftp [612.406485ms] Mar 29 21:58:26.101: INFO: Created: latency-svc-qwrh9 Mar 29 21:58:26.112: INFO: Got endpoints: latency-svc-qwrh9 [610.20825ms] Mar 29 21:58:26.146: INFO: Created: latency-svc-c6tg7 Mar 29 21:58:26.189: INFO: Got endpoints: latency-svc-c6tg7 [656.744796ms] Mar 29 21:58:26.200: INFO: Created: latency-svc-9xr5g Mar 29 21:58:26.209: INFO: Got endpoints: latency-svc-9xr5g [652.092179ms] Mar 29 21:58:26.230: INFO: Created: latency-svc-4pgrd Mar 29 21:58:26.251: INFO: Got endpoints: latency-svc-4pgrd [662.392943ms] Mar 29 21:58:26.277: INFO: Created: latency-svc-ph794 Mar 29 21:58:26.288: INFO: Got endpoints: latency-svc-ph794 [639.407022ms] Mar 29 21:58:26.333: INFO: Created: latency-svc-wjxqn Mar 29 21:58:26.336: INFO: Got endpoints: latency-svc-wjxqn [645.701278ms] Mar 29 21:58:26.336: INFO: Latencies: [52.155367ms 82.01274ms 147.691298ms 160.204899ms 190.407168ms 267.58645ms 301.610729ms 344.07748ms 415.327571ms 452.317178ms 488.718065ms 572.856121ms 584.41137ms 610.20825ms 610.256118ms 612.406485ms 614.313713ms 621.083143ms 628.011238ms 629.271246ms 639.407022ms 641.853482ms 645.701278ms 652.092179ms 652.937835ms 
656.744796ms 657.258409ms 662.392943ms 664.893463ms 666.13398ms 669.454411ms 670.364534ms 670.551668ms 675.768772ms 676.256002ms 676.716148ms 682.162684ms 682.197109ms 682.654679ms 683.576221ms 685.282786ms 686.303215ms 687.034868ms 687.649963ms 687.903019ms 687.998049ms 688.386816ms 688.458932ms 689.242223ms 693.620788ms 693.734205ms 693.788469ms 694.073248ms 694.107555ms 694.446939ms 696.974455ms 699.469374ms 700.716356ms 701.020488ms 701.978018ms 704.045876ms 704.585143ms 705.650391ms 705.653024ms 705.980385ms 706.202103ms 706.205045ms 706.227837ms 706.452967ms 707.769003ms 711.333078ms 712.201936ms 713.757799ms 715.733691ms 716.049418ms 716.654663ms 717.4952ms 717.576767ms 718.993269ms 720.281729ms 723.530616ms 723.812837ms 723.830261ms 724.159118ms 724.19966ms 724.435125ms 726.347268ms 726.795471ms 727.020096ms 727.746296ms 730.431319ms 730.876253ms 735.171071ms 735.312047ms 735.903141ms 735.905163ms 736.137282ms 736.874138ms 738.613089ms 739.292418ms 742.165154ms 745.483459ms 745.816284ms 748.047059ms 749.179153ms 750.439948ms 750.692521ms 751.700899ms 751.789925ms 753.148988ms 754.157924ms 757.200003ms 757.756614ms 759.519313ms 760.660831ms 763.714335ms 765.882001ms 765.946152ms 766.101582ms 769.327092ms 770.226583ms 770.367941ms 771.609503ms 773.479548ms 774.645782ms 776.350306ms 776.659863ms 781.350428ms 783.18735ms 783.590751ms 783.854218ms 783.886721ms 784.021453ms 784.168895ms 785.280181ms 787.6524ms 789.731368ms 790.13301ms 792.379757ms 795.29913ms 795.5947ms 795.959552ms 799.592467ms 801.455105ms 801.942126ms 802.472414ms 804.010202ms 805.539049ms 807.21148ms 808.071201ms 808.318873ms 809.136268ms 809.207187ms 809.533177ms 810.182319ms 811.186552ms 811.943032ms 812.208581ms 812.94924ms 813.280333ms 813.95722ms 814.082551ms 814.810744ms 819.840918ms 820.005176ms 825.735929ms 827.455866ms 831.498656ms 831.751324ms 833.311631ms 835.724175ms 837.518425ms 837.943253ms 838.037543ms 841.395158ms 843.231669ms 843.405989ms 843.942505ms 844.116065ms 844.546922ms 845.996803ms 847.950236ms 851.980928ms 852.921132ms 853.185378ms 854.73698ms 855.374359ms 855.40422ms 864.833551ms 866.216606ms 869.993288ms 876.916685ms 877.334115ms 877.790287ms 884.588718ms 893.767937ms 901.365236ms 901.693159ms 927.901869ms 983.287172ms] Mar 29 21:58:26.336: INFO: 50 %ile: 742.165154ms Mar 29 21:58:26.336: INFO: 90 %ile: 845.996803ms Mar 29 21:58:26.336: INFO: 99 %ile: 927.901869ms Mar 29 21:58:26.336: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:58:26.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8493" for this suite. 
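Each "Created:" / "Got endpoints:" pair above times how long a newly created Service takes to have its Endpoints object populated; the bracketed values are those per-service durations, and the 50/90/99 %ile lines summarize all 200 samples. A minimal shell sketch of the same measurement, outside the e2e framework, is below; the namespace, pod, and service names are illustrative and not taken from this run.

  ns=svc-latency-demo
  kubectl create namespace "$ns"
  kubectl run backend --image=nginx --restart=Never --labels=app=latency-demo -n "$ns"
  kubectl wait pod backend -n "$ns" --for=condition=Ready --timeout=120s
  start=$(date +%s%N)                                    # nanoseconds (GNU date)
  kubectl expose pod backend --name=latency-svc-demo --port=80 -n "$ns"
  # Poll until the Endpoints object carries at least one address, the same
  # condition the "Got endpoints" lines above report.
  until [ -n "$(kubectl get endpoints latency-svc-demo -n "$ns" \
        -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
    sleep 0.1
  done
  echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"
  kubectl delete namespace "$ns"

The conformance check itself only asserts that the tail of this distribution, the percentiles printed above, is not unreasonably high.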
• [SLOW TEST:13.364 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":230,"skipped":3856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:58:26.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 21:58:26.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3" in namespace "downward-api-7097" to be "success or failure" Mar 29 21:58:26.502: INFO: Pod "downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.799012ms Mar 29 21:58:28.506: INFO: Pod "downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018895359s Mar 29 21:58:30.511: INFO: Pod "downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022986978s STEP: Saw pod success Mar 29 21:58:30.511: INFO: Pod "downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3" satisfied condition "success or failure" Mar 29 21:58:30.514: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3 container client-container: STEP: delete the pod Mar 29 21:58:30.592: INFO: Waiting for pod downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3 to disappear Mar 29 21:58:30.594: INFO: Pod downwardapi-volume-a6cd085a-ef3c-431c-8ba4-8cfccf2776e3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:58:30.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7097" for this suite. 
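The Downward API test that just completed projects pod metadata into a volume file and verifies the per-item mode bits on that file. A sketch of the kind of pod it creates is below, assuming an illustrative name and a 0400 mode; the exact spec lives in the test's Go source, not in this log.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      # Print the mode and content of the projected file, roughly what the
      # test reads back from the container log. Downward API files are
      # symlinks into ..data, so -L shows the target file's mode.
      command: ["sh", "-c", "ls -lL /etc/podinfo/podname && cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
          mode: 0400   # per-item file mode, the field this test exercises
  EOF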
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:58:30.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-5031 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5031 STEP: Deleting pre-stop pod Mar 29 21:58:43.767: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:58:43.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5031" for this suite. 
• [SLOW TEST:13.254 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":232,"skipped":3925,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:58:43.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:58:49.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6585" for this suite. • [SLOW TEST:5.558 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":233,"skipped":3927,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:58:49.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-c304d552-5d4f-44d3-942e-6e96a5ba5217 STEP: Creating a pod to test consume configMaps Mar 29 21:58:49.488: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0" in namespace "projected-4709" to be "success or failure" Mar 29 21:58:49.492: INFO: Pod "pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.907156ms Mar 29 21:58:51.496: INFO: Pod "pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007770858s Mar 29 21:58:53.500: INFO: Pod "pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011914189s STEP: Saw pod success Mar 29 21:58:53.500: INFO: Pod "pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0" satisfied condition "success or failure" Mar 29 21:58:53.502: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0 container projected-configmap-volume-test: STEP: delete the pod Mar 29 21:58:53.538: INFO: Waiting for pod pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0 to disappear Mar 29 21:58:53.552: INFO: Pod pod-projected-configmaps-13a941ef-ed96-43f4-a1d9-fbb9903d46e0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 21:58:53.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4709" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3939,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 21:58:53.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8419 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-8419 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8419 Mar 29 21:58:53.636: INFO: Found 0 stateful pods, waiting for 1 Mar 29 21:59:03.640: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 29 21:59:03.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:59:04.044: INFO: stderr: "I0329 21:59:03.906273 2569 log.go:172] (0xc000a18000) (0xc0007a2b40) Create stream\nI0329 21:59:03.906331 2569 log.go:172] (0xc000a18000) (0xc0007a2b40) Stream added, broadcasting: 1\nI0329 21:59:03.909668 2569 log.go:172] (0xc000a18000) Reply frame received 
for 1\nI0329 21:59:03.909713 2569 log.go:172] (0xc000a18000) (0xc000647a40) Create stream\nI0329 21:59:03.909733 2569 log.go:172] (0xc000a18000) (0xc000647a40) Stream added, broadcasting: 3\nI0329 21:59:03.910792 2569 log.go:172] (0xc000a18000) Reply frame received for 3\nI0329 21:59:03.910833 2569 log.go:172] (0xc000a18000) (0xc0008be140) Create stream\nI0329 21:59:03.910852 2569 log.go:172] (0xc000a18000) (0xc0008be140) Stream added, broadcasting: 5\nI0329 21:59:03.911820 2569 log.go:172] (0xc000a18000) Reply frame received for 5\nI0329 21:59:04.013268 2569 log.go:172] (0xc000a18000) Data frame received for 5\nI0329 21:59:04.013298 2569 log.go:172] (0xc0008be140) (5) Data frame handling\nI0329 21:59:04.013311 2569 log.go:172] (0xc0008be140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:59:04.037003 2569 log.go:172] (0xc000a18000) Data frame received for 3\nI0329 21:59:04.037043 2569 log.go:172] (0xc000647a40) (3) Data frame handling\nI0329 21:59:04.037078 2569 log.go:172] (0xc000647a40) (3) Data frame sent\nI0329 21:59:04.037125 2569 log.go:172] (0xc000a18000) Data frame received for 3\nI0329 21:59:04.037154 2569 log.go:172] (0xc000647a40) (3) Data frame handling\nI0329 21:59:04.037692 2569 log.go:172] (0xc000a18000) Data frame received for 5\nI0329 21:59:04.037719 2569 log.go:172] (0xc0008be140) (5) Data frame handling\nI0329 21:59:04.039136 2569 log.go:172] (0xc000a18000) Data frame received for 1\nI0329 21:59:04.039182 2569 log.go:172] (0xc0007a2b40) (1) Data frame handling\nI0329 21:59:04.039214 2569 log.go:172] (0xc0007a2b40) (1) Data frame sent\nI0329 21:59:04.039239 2569 log.go:172] (0xc000a18000) (0xc0007a2b40) Stream removed, broadcasting: 1\nI0329 21:59:04.039258 2569 log.go:172] (0xc000a18000) Go away received\nI0329 21:59:04.039575 2569 log.go:172] (0xc000a18000) (0xc0007a2b40) Stream removed, broadcasting: 1\nI0329 21:59:04.039606 2569 log.go:172] (0xc000a18000) (0xc000647a40) Stream removed, broadcasting: 3\nI0329 21:59:04.039627 2569 log.go:172] (0xc000a18000) (0xc0008be140) Stream removed, broadcasting: 5\n" Mar 29 21:59:04.044: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:59:04.044: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:59:04.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 29 21:59:14.052: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:59:14.052: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:59:14.106: INFO: POD NODE PHASE GRACE CONDITIONS Mar 29 21:59:14.106: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:58:53 +0000 UTC }] Mar 29 21:59:14.106: INFO: Mar 29 21:59:14.106: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 29 21:59:15.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.956062749s Mar 29 21:59:16.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.95141457s Mar 29 21:59:17.128: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 6.948146165s Mar 29 21:59:18.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.934438152s Mar 29 21:59:19.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.912588954s Mar 29 21:59:20.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.908202398s Mar 29 21:59:21.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.902941197s Mar 29 21:59:22.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.897823157s Mar 29 21:59:23.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 892.467379ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8419 Mar 29 21:59:24.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:59:24.407: INFO: stderr: "I0329 21:59:24.302955 2592 log.go:172] (0xc00090d130) (0xc0009c8460) Create stream\nI0329 21:59:24.302993 2592 log.go:172] (0xc00090d130) (0xc0009c8460) Stream added, broadcasting: 1\nI0329 21:59:24.307291 2592 log.go:172] (0xc00090d130) Reply frame received for 1\nI0329 21:59:24.307320 2592 log.go:172] (0xc00090d130) (0xc0001d3360) Create stream\nI0329 21:59:24.307328 2592 log.go:172] (0xc00090d130) (0xc0001d3360) Stream added, broadcasting: 3\nI0329 21:59:24.308261 2592 log.go:172] (0xc00090d130) Reply frame received for 3\nI0329 21:59:24.308293 2592 log.go:172] (0xc00090d130) (0xc0009c8000) Create stream\nI0329 21:59:24.308306 2592 log.go:172] (0xc00090d130) (0xc0009c8000) Stream added, broadcasting: 5\nI0329 21:59:24.309565 2592 log.go:172] (0xc00090d130) Reply frame received for 5\nI0329 21:59:24.401493 2592 log.go:172] (0xc00090d130) Data frame received for 5\nI0329 21:59:24.401539 2592 log.go:172] (0xc0009c8000) (5) Data frame handling\nI0329 21:59:24.401552 2592 log.go:172] (0xc0009c8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0329 21:59:24.401600 2592 log.go:172] (0xc00090d130) Data frame received for 3\nI0329 21:59:24.401636 2592 log.go:172] (0xc0001d3360) (3) Data frame handling\nI0329 21:59:24.401665 2592 log.go:172] (0xc0001d3360) (3) Data frame sent\nI0329 21:59:24.401716 2592 log.go:172] (0xc00090d130) Data frame received for 3\nI0329 21:59:24.401738 2592 log.go:172] (0xc0001d3360) (3) Data frame handling\nI0329 21:59:24.401772 2592 log.go:172] (0xc00090d130) Data frame received for 5\nI0329 21:59:24.401795 2592 log.go:172] (0xc0009c8000) (5) Data frame handling\nI0329 21:59:24.403533 2592 log.go:172] (0xc00090d130) Data frame received for 1\nI0329 21:59:24.403565 2592 log.go:172] (0xc0009c8460) (1) Data frame handling\nI0329 21:59:24.403579 2592 log.go:172] (0xc0009c8460) (1) Data frame sent\nI0329 21:59:24.403589 2592 log.go:172] (0xc00090d130) (0xc0009c8460) Stream removed, broadcasting: 1\nI0329 21:59:24.403643 2592 log.go:172] (0xc00090d130) Go away received\nI0329 21:59:24.403869 2592 log.go:172] (0xc00090d130) (0xc0009c8460) Stream removed, broadcasting: 1\nI0329 21:59:24.403887 2592 log.go:172] (0xc00090d130) (0xc0001d3360) Stream removed, broadcasting: 3\nI0329 21:59:24.403895 2592 log.go:172] (0xc00090d130) (0xc0009c8000) Stream removed, broadcasting: 5\n" Mar 29 21:59:24.407: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:59:24.407: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:59:24.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:59:24.613: INFO: stderr: "I0329 21:59:24.536473 2612 log.go:172] (0xc000956b00) (0xc000788140) Create stream\nI0329 21:59:24.536532 2612 log.go:172] (0xc000956b00) (0xc000788140) Stream added, broadcasting: 1\nI0329 21:59:24.539055 2612 log.go:172] (0xc000956b00) Reply frame received for 1\nI0329 21:59:24.539095 2612 log.go:172] (0xc000956b00) (0xc000788280) Create stream\nI0329 21:59:24.539104 2612 log.go:172] (0xc000956b00) (0xc000788280) Stream added, broadcasting: 3\nI0329 21:59:24.539959 2612 log.go:172] (0xc000956b00) Reply frame received for 3\nI0329 21:59:24.539994 2612 log.go:172] (0xc000956b00) (0xc0009e6000) Create stream\nI0329 21:59:24.540006 2612 log.go:172] (0xc000956b00) (0xc0009e6000) Stream added, broadcasting: 5\nI0329 21:59:24.540826 2612 log.go:172] (0xc000956b00) Reply frame received for 5\nI0329 21:59:24.606697 2612 log.go:172] (0xc000956b00) Data frame received for 5\nI0329 21:59:24.606727 2612 log.go:172] (0xc0009e6000) (5) Data frame handling\nI0329 21:59:24.606739 2612 log.go:172] (0xc0009e6000) (5) Data frame sent\nI0329 21:59:24.606749 2612 log.go:172] (0xc000956b00) Data frame received for 5\nI0329 21:59:24.606758 2612 log.go:172] (0xc0009e6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0329 21:59:24.606769 2612 log.go:172] (0xc000956b00) Data frame received for 3\nI0329 21:59:24.606840 2612 log.go:172] (0xc000788280) (3) Data frame handling\nI0329 21:59:24.606879 2612 log.go:172] (0xc000788280) (3) Data frame sent\nI0329 21:59:24.606894 2612 log.go:172] (0xc000956b00) Data frame received for 3\nI0329 21:59:24.606907 2612 log.go:172] (0xc000788280) (3) Data frame handling\nI0329 21:59:24.608310 2612 log.go:172] (0xc000956b00) Data frame received for 1\nI0329 21:59:24.608330 2612 log.go:172] (0xc000788140) (1) Data frame handling\nI0329 21:59:24.608346 2612 log.go:172] (0xc000788140) (1) Data frame sent\nI0329 21:59:24.608365 2612 log.go:172] (0xc000956b00) (0xc000788140) Stream removed, broadcasting: 1\nI0329 21:59:24.608453 2612 log.go:172] (0xc000956b00) Go away received\nI0329 21:59:24.608756 2612 log.go:172] (0xc000956b00) (0xc000788140) Stream removed, broadcasting: 1\nI0329 21:59:24.608781 2612 log.go:172] (0xc000956b00) (0xc000788280) Stream removed, broadcasting: 3\nI0329 21:59:24.608792 2612 log.go:172] (0xc000956b00) (0xc0009e6000) Stream removed, broadcasting: 5\n" Mar 29 21:59:24.613: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:59:24.613: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:59:24.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:59:24.827: INFO: stderr: "I0329 21:59:24.736816 2633 log.go:172] (0xc000a1e790) (0xc000910280) Create stream\nI0329 21:59:24.736883 2633 log.go:172] (0xc000a1e790) (0xc000910280) Stream added, broadcasting: 1\nI0329 21:59:24.741000 2633 log.go:172] (0xc000a1e790) Reply frame received for 1\nI0329 21:59:24.741028 2633 log.go:172] 
(0xc000a1e790) (0xc000600780) Create stream\nI0329 21:59:24.741036 2633 log.go:172] (0xc000a1e790) (0xc000600780) Stream added, broadcasting: 3\nI0329 21:59:24.742222 2633 log.go:172] (0xc000a1e790) Reply frame received for 3\nI0329 21:59:24.742270 2633 log.go:172] (0xc000a1e790) (0xc0002bf540) Create stream\nI0329 21:59:24.742281 2633 log.go:172] (0xc000a1e790) (0xc0002bf540) Stream added, broadcasting: 5\nI0329 21:59:24.743317 2633 log.go:172] (0xc000a1e790) Reply frame received for 5\nI0329 21:59:24.821272 2633 log.go:172] (0xc000a1e790) Data frame received for 5\nI0329 21:59:24.821313 2633 log.go:172] (0xc0002bf540) (5) Data frame handling\nI0329 21:59:24.821327 2633 log.go:172] (0xc0002bf540) (5) Data frame sent\nI0329 21:59:24.821338 2633 log.go:172] (0xc000a1e790) Data frame received for 5\nI0329 21:59:24.821346 2633 log.go:172] (0xc0002bf540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0329 21:59:24.821372 2633 log.go:172] (0xc000a1e790) Data frame received for 3\nI0329 21:59:24.821383 2633 log.go:172] (0xc000600780) (3) Data frame handling\nI0329 21:59:24.821399 2633 log.go:172] (0xc000600780) (3) Data frame sent\nI0329 21:59:24.821417 2633 log.go:172] (0xc000a1e790) Data frame received for 3\nI0329 21:59:24.821426 2633 log.go:172] (0xc000600780) (3) Data frame handling\nI0329 21:59:24.822748 2633 log.go:172] (0xc000a1e790) Data frame received for 1\nI0329 21:59:24.822774 2633 log.go:172] (0xc000910280) (1) Data frame handling\nI0329 21:59:24.822792 2633 log.go:172] (0xc000910280) (1) Data frame sent\nI0329 21:59:24.822815 2633 log.go:172] (0xc000a1e790) (0xc000910280) Stream removed, broadcasting: 1\nI0329 21:59:24.822977 2633 log.go:172] (0xc000a1e790) Go away received\nI0329 21:59:24.823189 2633 log.go:172] (0xc000a1e790) (0xc000910280) Stream removed, broadcasting: 1\nI0329 21:59:24.823209 2633 log.go:172] (0xc000a1e790) (0xc000600780) Stream removed, broadcasting: 3\nI0329 21:59:24.823219 2633 log.go:172] (0xc000a1e790) (0xc0002bf540) Stream removed, broadcasting: 5\n" Mar 29 21:59:24.827: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 29 21:59:24.827: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 29 21:59:24.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 29 21:59:34.836: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:59:34.836: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 29 21:59:34.836: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 29 21:59:34.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:59:35.055: INFO: stderr: "I0329 21:59:34.971948 2655 log.go:172] (0xc0009ad080) (0xc000bd25a0) Create stream\nI0329 21:59:34.972559 2655 log.go:172] (0xc0009ad080) (0xc000bd25a0) Stream added, broadcasting: 1\nI0329 21:59:34.976492 2655 log.go:172] (0xc0009ad080) Reply frame received for 1\nI0329 21:59:34.976527 2655 log.go:172] (0xc0009ad080) (0xc000763c20) Create stream\nI0329 21:59:34.976536 2655 log.go:172] (0xc0009ad080) (0xc000763c20) Stream 
added, broadcasting: 3\nI0329 21:59:34.977439 2655 log.go:172] (0xc0009ad080) Reply frame received for 3\nI0329 21:59:34.977466 2655 log.go:172] (0xc0009ad080) (0xc0006f0820) Create stream\nI0329 21:59:34.977475 2655 log.go:172] (0xc0009ad080) (0xc0006f0820) Stream added, broadcasting: 5\nI0329 21:59:34.978247 2655 log.go:172] (0xc0009ad080) Reply frame received for 5\nI0329 21:59:35.049753 2655 log.go:172] (0xc0009ad080) Data frame received for 5\nI0329 21:59:35.049792 2655 log.go:172] (0xc0006f0820) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:59:35.049840 2655 log.go:172] (0xc0009ad080) Data frame received for 3\nI0329 21:59:35.049881 2655 log.go:172] (0xc000763c20) (3) Data frame handling\nI0329 21:59:35.049914 2655 log.go:172] (0xc000763c20) (3) Data frame sent\nI0329 21:59:35.049941 2655 log.go:172] (0xc0009ad080) Data frame received for 3\nI0329 21:59:35.049961 2655 log.go:172] (0xc000763c20) (3) Data frame handling\nI0329 21:59:35.050003 2655 log.go:172] (0xc0006f0820) (5) Data frame sent\nI0329 21:59:35.050044 2655 log.go:172] (0xc0009ad080) Data frame received for 5\nI0329 21:59:35.050070 2655 log.go:172] (0xc0006f0820) (5) Data frame handling\nI0329 21:59:35.051639 2655 log.go:172] (0xc0009ad080) Data frame received for 1\nI0329 21:59:35.051658 2655 log.go:172] (0xc000bd25a0) (1) Data frame handling\nI0329 21:59:35.051683 2655 log.go:172] (0xc000bd25a0) (1) Data frame sent\nI0329 21:59:35.051691 2655 log.go:172] (0xc0009ad080) (0xc000bd25a0) Stream removed, broadcasting: 1\nI0329 21:59:35.051727 2655 log.go:172] (0xc0009ad080) Go away received\nI0329 21:59:35.051945 2655 log.go:172] (0xc0009ad080) (0xc000bd25a0) Stream removed, broadcasting: 1\nI0329 21:59:35.051960 2655 log.go:172] (0xc0009ad080) (0xc000763c20) Stream removed, broadcasting: 3\nI0329 21:59:35.051967 2655 log.go:172] (0xc0009ad080) (0xc0006f0820) Stream removed, broadcasting: 5\n" Mar 29 21:59:35.055: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:59:35.055: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:59:35.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:59:35.295: INFO: stderr: "I0329 21:59:35.182665 2675 log.go:172] (0xc0001056b0) (0xc000689a40) Create stream\nI0329 21:59:35.182735 2675 log.go:172] (0xc0001056b0) (0xc000689a40) Stream added, broadcasting: 1\nI0329 21:59:35.186206 2675 log.go:172] (0xc0001056b0) Reply frame received for 1\nI0329 21:59:35.186248 2675 log.go:172] (0xc0001056b0) (0xc000752000) Create stream\nI0329 21:59:35.186267 2675 log.go:172] (0xc0001056b0) (0xc000752000) Stream added, broadcasting: 3\nI0329 21:59:35.187456 2675 log.go:172] (0xc0001056b0) Reply frame received for 3\nI0329 21:59:35.187510 2675 log.go:172] (0xc0001056b0) (0xc000752140) Create stream\nI0329 21:59:35.187531 2675 log.go:172] (0xc0001056b0) (0xc000752140) Stream added, broadcasting: 5\nI0329 21:59:35.188635 2675 log.go:172] (0xc0001056b0) Reply frame received for 5\nI0329 21:59:35.263093 2675 log.go:172] (0xc0001056b0) Data frame received for 5\nI0329 21:59:35.263113 2675 log.go:172] (0xc000752140) (5) Data frame handling\nI0329 21:59:35.263127 2675 log.go:172] (0xc000752140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:59:35.289622 2675 
log.go:172] (0xc0001056b0) Data frame received for 3\nI0329 21:59:35.289652 2675 log.go:172] (0xc000752000) (3) Data frame handling\nI0329 21:59:35.289681 2675 log.go:172] (0xc000752000) (3) Data frame sent\nI0329 21:59:35.289696 2675 log.go:172] (0xc0001056b0) Data frame received for 3\nI0329 21:59:35.289710 2675 log.go:172] (0xc000752000) (3) Data frame handling\nI0329 21:59:35.289934 2675 log.go:172] (0xc0001056b0) Data frame received for 5\nI0329 21:59:35.289953 2675 log.go:172] (0xc000752140) (5) Data frame handling\nI0329 21:59:35.291446 2675 log.go:172] (0xc0001056b0) Data frame received for 1\nI0329 21:59:35.291472 2675 log.go:172] (0xc000689a40) (1) Data frame handling\nI0329 21:59:35.291488 2675 log.go:172] (0xc000689a40) (1) Data frame sent\nI0329 21:59:35.291505 2675 log.go:172] (0xc0001056b0) (0xc000689a40) Stream removed, broadcasting: 1\nI0329 21:59:35.291532 2675 log.go:172] (0xc0001056b0) Go away received\nI0329 21:59:35.291782 2675 log.go:172] (0xc0001056b0) (0xc000689a40) Stream removed, broadcasting: 1\nI0329 21:59:35.291803 2675 log.go:172] (0xc0001056b0) (0xc000752000) Stream removed, broadcasting: 3\nI0329 21:59:35.291808 2675 log.go:172] (0xc0001056b0) (0xc000752140) Stream removed, broadcasting: 5\n" Mar 29 21:59:35.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:59:35.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:59:35.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 29 21:59:35.521: INFO: stderr: "I0329 21:59:35.423412 2696 log.go:172] (0xc0000f5550) (0xc0006a5e00) Create stream\nI0329 21:59:35.423465 2696 log.go:172] (0xc0000f5550) (0xc0006a5e00) Stream added, broadcasting: 1\nI0329 21:59:35.426089 2696 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0329 21:59:35.426137 2696 log.go:172] (0xc0000f5550) (0xc0005ba6e0) Create stream\nI0329 21:59:35.426152 2696 log.go:172] (0xc0000f5550) (0xc0005ba6e0) Stream added, broadcasting: 3\nI0329 21:59:35.427343 2696 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0329 21:59:35.427387 2696 log.go:172] (0xc0000f5550) (0xc0002854a0) Create stream\nI0329 21:59:35.427408 2696 log.go:172] (0xc0000f5550) (0xc0002854a0) Stream added, broadcasting: 5\nI0329 21:59:35.428476 2696 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0329 21:59:35.483293 2696 log.go:172] (0xc0000f5550) Data frame received for 5\nI0329 21:59:35.483315 2696 log.go:172] (0xc0002854a0) (5) Data frame handling\nI0329 21:59:35.483328 2696 log.go:172] (0xc0002854a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0329 21:59:35.514116 2696 log.go:172] (0xc0000f5550) Data frame received for 3\nI0329 21:59:35.514160 2696 log.go:172] (0xc0005ba6e0) (3) Data frame handling\nI0329 21:59:35.514186 2696 log.go:172] (0xc0005ba6e0) (3) Data frame sent\nI0329 21:59:35.514218 2696 log.go:172] (0xc0000f5550) Data frame received for 3\nI0329 21:59:35.514234 2696 log.go:172] (0xc0005ba6e0) (3) Data frame handling\nI0329 21:59:35.514251 2696 log.go:172] (0xc0000f5550) Data frame received for 5\nI0329 21:59:35.514269 2696 log.go:172] (0xc0002854a0) (5) Data frame handling\nI0329 21:59:35.515771 2696 log.go:172] (0xc0000f5550) Data frame received for 1\nI0329 21:59:35.515788 2696 log.go:172] (0xc0006a5e00) (1) Data frame handling\nI0329 
21:59:35.515798 2696 log.go:172] (0xc0006a5e00) (1) Data frame sent\nI0329 21:59:35.516055 2696 log.go:172] (0xc0000f5550) (0xc0006a5e00) Stream removed, broadcasting: 1\nI0329 21:59:35.516156 2696 log.go:172] (0xc0000f5550) Go away received\nI0329 21:59:35.516525 2696 log.go:172] (0xc0000f5550) (0xc0006a5e00) Stream removed, broadcasting: 1\nI0329 21:59:35.516548 2696 log.go:172] (0xc0000f5550) (0xc0005ba6e0) Stream removed, broadcasting: 3\nI0329 21:59:35.516561 2696 log.go:172] (0xc0000f5550) (0xc0002854a0) Stream removed, broadcasting: 5\n" Mar 29 21:59:35.521: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 29 21:59:35.521: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 29 21:59:35.521: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 21:59:35.525: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 29 21:59:45.533: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:59:45.533: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:59:45.533: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 29 21:59:45.549: INFO: POD NODE PHASE GRACE CONDITIONS Mar 29 21:59:45.549: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:58:53 +0000 UTC }] Mar 29 21:59:45.549: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC }] Mar 29 21:59:45.549: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC }] Mar 29 21:59:45.549: INFO: Mar 29 21:59:45.549: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 29 21:59:46.556: INFO: POD NODE PHASE GRACE CONDITIONS Mar 29 21:59:46.556: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:58:53 +0000 UTC }] Mar 
29 21:59:46.556: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC }] Mar 29 21:59:46.556: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 21:59:14 +0000 UTC }] Mar 29 21:59:46.556: INFO: Mar 29 21:59:46.556: INFO: StatefulSet ss has not reached scale 0, at 3
[the framework re-polled the same POD NODE PHASE GRACE CONDITIONS table once per second from 21:59:47 through 21:59:54; ss-2 was gone after the 21:59:47 poll, and every later poll showed ss-0 (jerma-worker2) and ss-1 (jerma-worker) Pending with the webserver container unready, each ending "StatefulSet ss has not reached scale 0, at 2"]
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8419 Mar 29 21:59:55.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 21:59:55.740: INFO: rc: 1 Mar 29 21:59:55.740: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 29 22:00:05.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 22:00:05.838: INFO: rc: 1 Mar 29 22:00:05.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[the identical RunHostCmd attempt was retried every 10s from 22:00:15 through 22:04:50, failing each time with rc: 1 and the same NotFound error]
Mar 29 22:05:00.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8419 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 29 22:05:00.997: INFO: rc: 1 Mar 29 22:05:00.997: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Mar 29 22:05:00.997: INFO: Scaling statefulset ss to 0 Mar 29 22:05:01.006: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 29 22:05:01.009: INFO: Deleting all statefulsets in ns statefulset-8419 Mar 29 22:05:01.011: INFO: Scaling statefulset ss to 0 Mar 29 22:05:01.020: INFO: Waiting for statefulset status.replicas updated to 0 Mar 29 22:05:01.022: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:01.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8419" for this suite. • [SLOW TEST:367.483 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":235,"skipped":3947,"failed":0} SSSS
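The scale-down the framework drives above can be reproduced with plain kubectl. The sketch below is illustrative rather than part of the test run: the namespace and StatefulSet name are taken from the log, while the polling loop is an assumed stand-in for the framework's "Waiting for statefulset status.replicas updated to 0" step.

    # Scale the StatefulSet "ss" down to zero replicas.
    kubectl --namespace=statefulset-8419 scale statefulset ss --replicas=0

    # Poll until the controller reports zero replicas.
    until [ "$(kubectl --namespace=statefulset-8419 get statefulset ss \
        -o jsonpath='{.status.replicas}')" = "0" ]; do
      sleep 1
    done

The long run of RunHostCmd retries above appears to be a side effect of the same scale-down: once ss-0 is deleted, kubectl exec against it can only fail, so the framework keeps retrying until its timeout expires and then proceeds.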
------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:01.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 29 22:05:08.077: INFO: 5 pods remaining Mar 29 22:05:08.077: INFO: 0 pods have nil DeletionTimestamp Mar 29 22:05:08.077: INFO: Mar 29 22:05:08.710: INFO: 0 pods remaining Mar 29 22:05:08.710: INFO: 0 pods have nil DeletionTimestamp Mar 29 22:05:08.710: INFO: STEP: Gathering metrics W0329 22:05:09.644938 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 29 22:05:09.645: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:09.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-352" for this suite. • [SLOW TEST:8.851 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":236,"skipped":3951,"failed":0} SSSSSSSSSSSSS
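The deleteOptions behaviour this test exercises can also be driven by hand against the API. A minimal sketch, assuming a hypothetical ReplicationController named my-rc in namespace gc-352 (the test does not log the rc's actual name): with propagationPolicy Foreground, the rc stays visible, with its deletionTimestamp set, until the garbage collector has removed all of its pods.

    # Expose the API server locally, then delete the rc with Foreground propagation.
    kubectl proxy --port=8080 &
    curl -X DELETE \
      'http://localhost:8080/api/v1/namespaces/gc-352/replicationcontrollers/my-rc' \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'

    # While its pods terminate, the rc still exists and carries a deletionTimestamp.
    kubectl --namespace=gc-352 get rc my-rc -o jsonpath='{.metadata.deletionTimestamp}'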
------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:09.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-090c3bb3-13b3-4790-a5e3-75e801fa872d STEP: Creating a pod to test consume secrets Mar 29 22:05:10.755: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65" in namespace "projected-490" to be "success or failure" Mar 29 22:05:10.850: INFO: Pod "pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65": Phase="Pending", Reason="", readiness=false. Elapsed: 95.166142ms Mar 29 22:05:12.854: INFO: Pod "pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099213527s Mar 29 22:05:14.859: INFO: Pod "pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65": Phase="Running", Reason="", readiness=true. Elapsed: 4.103388614s Mar 29 22:05:16.863: INFO: Pod "pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107745512s STEP: Saw pod success Mar 29 22:05:16.863: INFO: Pod "pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65" satisfied condition "success or failure" Mar 29 22:05:16.867: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65 container projected-secret-volume-test: STEP: delete the pod Mar 29 22:05:16.905: INFO: Waiting for pod pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65 to disappear Mar 29 22:05:16.914: INFO: Pod pod-projected-secrets-e6f69c43-f63c-4bc5-ae16-88d2c6c1ab65 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:16.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-490" for this suite. • [SLOW TEST:7.028 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3964,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
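The pod this test builds programmatically has roughly the shape of the manifest below. This is a sketch reconstructed from the test name and log output, not the test's own spec: the secret name, file mode, and uid/gid values are illustrative, with runAsUser and fsGroup standing in for the "non-root with defaultMode and fsGroup set" part.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: projected-secret-demo          # illustrative name
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo     # illustrative name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                    # non-root
        fsGroup: 2000
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/projected-secret-volume"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
      volumes:
      - name: projected-secret-volume
        projected:
          defaultMode: 0440                # mode applied to the projected files
          sources:
          - secret:
              name: projected-secret-demo
    EOF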
------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:16.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 29 22:05:17.030: INFO: Waiting up to 5m0s for pod "pod-2869b9d9-3160-469e-99ae-7cf1d557b217" in namespace "emptydir-7481" to be "success or failure" Mar 29 22:05:17.054: INFO: Pod "pod-2869b9d9-3160-469e-99ae-7cf1d557b217": Phase="Pending", Reason="", readiness=false. Elapsed: 23.901353ms Mar 29 22:05:19.058: INFO: Pod "pod-2869b9d9-3160-469e-99ae-7cf1d557b217": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028078734s Mar 29 22:05:21.062: INFO: Pod "pod-2869b9d9-3160-469e-99ae-7cf1d557b217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031907118s STEP: Saw pod success Mar 29 22:05:21.062: INFO: Pod "pod-2869b9d9-3160-469e-99ae-7cf1d557b217" satisfied condition "success or failure" Mar 29 22:05:21.065: INFO: Trying to get logs from node jerma-worker pod pod-2869b9d9-3160-469e-99ae-7cf1d557b217 container test-container: STEP: delete the pod Mar 29 22:05:21.087: INFO: Waiting for pod pod-2869b9d9-3160-469e-99ae-7cf1d557b217 to disappear Mar 29 22:05:21.091: INFO: Pod pod-2869b9d9-3160-469e-99ae-7cf1d557b217 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:21.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7481" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":4022,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:21.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-43de0f4f-48bf-4620-8e08-5d16b41b3508 STEP: Creating a pod to test consume secrets Mar 29 22:05:21.189: INFO: Waiting up to 5m0s for pod "pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c" in namespace "secrets-8195" to be "success or failure" Mar 29 22:05:21.203: INFO: Pod "pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.951382ms Mar 29 22:05:23.208: INFO: Pod "pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018611169s Mar 29 22:05:25.211: INFO: Pod "pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021968903s STEP: Saw pod success Mar 29 22:05:25.211: INFO: Pod "pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c" satisfied condition "success or failure" Mar 29 22:05:25.214: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c container secret-volume-test: STEP: delete the pod Mar 29 22:05:25.265: INFO: Waiting for pod pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c to disappear Mar 29 22:05:25.289: INFO: Pod pod-secrets-8a0dfc0a-d9f2-44a5-b777-e03b1dec878c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:25.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8195" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4028,"failed":0} SSSSSS
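Both volume tests above reduce to small pod manifests. For the (root,0777,tmpfs) emptyDir case, the tmpfs part corresponds to declaring the volume with medium: Memory. A minimal illustrative sketch, with hypothetical names; the real test also writes a file with the requested 0777 mode and verifies it, which is compressed here into a single container command.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                   # tmpfs-backed emptyDir
    EOF

The Secrets defaultMode test is analogous, except that the mode is set through the secret volume's defaultMode field rather than through a projected volume.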
------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:25.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 22:05:25.368: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 29 22:05:28.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 create -f -' Mar 29 22:05:31.204: INFO: stderr: "" Mar 29 22:05:31.204: INFO: stdout: "e2e-test-crd-publish-openapi-5685-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 29 22:05:31.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 delete e2e-test-crd-publish-openapi-5685-crds test-foo' Mar 29 22:05:31.309: INFO: stderr: "" Mar 29 22:05:31.309: INFO: stdout: "e2e-test-crd-publish-openapi-5685-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 29 22:05:31.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 apply -f -' Mar 29 22:05:31.570: INFO: stderr: "" Mar 29 22:05:31.570: INFO: stdout: "e2e-test-crd-publish-openapi-5685-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 29 22:05:31.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 delete e2e-test-crd-publish-openapi-5685-crds test-foo' Mar 29 22:05:31.678: INFO: stderr: "" Mar 29 22:05:31.678: INFO: stdout: "e2e-test-crd-publish-openapi-5685-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 29 22:05:31.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 create -f -' Mar 29 22:05:31.885: INFO: rc: 1 Mar 29 22:05:31.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 apply -f -' Mar 29 22:05:32.075: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 29 22:05:32.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 create -f -' Mar 29 22:05:32.316: INFO: rc: 1 Mar 29 22:05:32.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3613 apply -f -' Mar 29 22:05:32.523: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 29 22:05:32.523:
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5685-crds' Mar 29 22:05:32.746: INFO: stderr: "" Mar 29 22:05:32.747: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 29 22:05:32.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5685-crds.metadata' Mar 29 22:05:32.978: INFO: stderr: "" Mar 29 22:05:32.978: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 29 22:05:32.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5685-crds.spec' Mar 29 22:05:33.211: INFO: stderr: "" Mar 29 22:05:33.211: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 29 22:05:33.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5685-crds.spec.bars' Mar 29 22:05:33.428: INFO: stderr: "" Mar 29 22:05:33.428: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5685-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 29 22:05:33.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5685-crds.spec.bars2' Mar 29 22:05:33.657: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:36.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3613" for this suite. • [SLOW TEST:11.265 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":240,"skipped":4034,"failed":0} SSSSSSSS
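The explain calls this test makes are ordinary kubectl usage and work against any CRD that publishes a structural schema. The commands below are taken directly from the log, with the --kubeconfig flag dropped since kubectl falls back to ~/.kube/config by default:

    # Top-level description of the CR, then drill into nested properties:
    kubectl explain e2e-test-crd-publish-openapi-5685-crds
    kubectl explain e2e-test-crd-publish-openapi-5685-crds.spec
    kubectl explain e2e-test-crd-publish-openapi-5685-crds.spec.bars

    # A property missing from the published schema is rejected (non-zero exit),
    # which is what the test asserts for spec.bars2:
    kubectl explain e2e-test-crd-publish-openapi-5685-crds.spec.bars2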
------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:36.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 22:05:36.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2797' Mar 29 22:05:37.006: INFO: stderr: "" Mar 29 22:05:37.006: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 29 22:05:37.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2797' Mar 29 22:05:37.308: INFO: stderr: "" Mar 29 22:05:37.308: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 29 22:05:38.319: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 22:05:38.319: INFO: Found 0 / 1 Mar 29 22:05:39.344: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 22:05:39.344: INFO: Found 0 / 1 Mar 29 22:05:40.333: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 22:05:40.333: INFO: Found 1 / 1 Mar 29 22:05:40.333: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 29 22:05:40.336: INFO: Selector matched 1 pods for map[app:agnhost] Mar 29 22:05:40.336: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 29 22:05:40.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-k299r --namespace=kubectl-2797' Mar 29 22:05:40.464: INFO: stderr: "" Mar 29 22:05:40.464: INFO: stdout: "Name: agnhost-master-k299r\nNamespace: kubectl-2797\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Sun, 29 Mar 2020 22:05:37 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.87\nIPs:\n IP: 10.244.1.87\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://154f582f57825e286ec63bbebc0437c29f30e927846cc5de811ad6ef5934d16d\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 29 Mar 2020 22:05:39 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-l5cdl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-l5cdl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-l5cdl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-2797/agnhost-master-k299r to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" Mar 29 22:05:40.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2797' Mar 29 22:05:40.577: INFO: stderr: "" Mar 29 22:05:40.577: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2797\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-k299r\n" Mar 29 22:05:40.577: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2797' Mar 29 22:05:40.684: INFO: stderr: "" Mar 29 22:05:40.684: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2797\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.103.197.135\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.87:6379\nSession Affinity: None\nEvents: \n" Mar 29 22:05:40.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 29 22:05:40.816: INFO: stderr: "" Mar 29 22:05:40.816: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sun, 29 Mar 2020 22:05:31 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 29 Mar 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 29 Mar 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 29 Mar 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 29 Mar 2020 22:05:27 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 14d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 14d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 14d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system 
kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 14d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 29 22:05:40.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2797' Mar 29 22:05:40.913: INFO: stderr: "" Mar 29 22:05:40.913: INFO: stdout: "Name: kubectl-2797\nLabels: e2e-framework=kubectl\n e2e-run=45d88c4e-a8a4-4b33-89ed-9a8e86a8499e\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:40.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2797" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":241,"skipped":4042,"failed":0} ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:40.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9455/configmap-test-cb107b99-6e34-41d5-8a20-d7ebf6acccb1 STEP: Creating a pod to test consume configMaps Mar 29 22:05:41.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571" in namespace "configmap-9455" to be "success or failure" Mar 29 22:05:41.030: INFO: Pod "pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571": Phase="Pending", Reason="", readiness=false. Elapsed: 14.098121ms Mar 29 22:05:43.034: INFO: Pod "pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017721476s Mar 29 22:05:45.038: INFO: Pod "pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021472951s STEP: Saw pod success Mar 29 22:05:45.038: INFO: Pod "pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571" satisfied condition "success or failure" Mar 29 22:05:45.040: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571 container env-test: STEP: delete the pod Mar 29 22:05:45.063: INFO: Waiting for pod pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571 to disappear Mar 29 22:05:45.090: INFO: Pod pod-configmaps-878da00d-e663-4ffa-bafc-c699df53e571 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:05:45.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9455" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4042,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:05:45.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0329 22:06:25.963123 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 29 22:06:25.963: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:06:25.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3609" for this suite. 
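
The orphaning behavior verified above is driven entirely by the delete options on the ReplicationController, not by anything in the controller's spec. A minimal sketch of the setup, with illustrative names and image (not the test's exact objects):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: orphan-demo
    spec:
      replicas: 2
      selector:
        app: orphan-demo
      template:
        metadata:
          labels:
            app: orphan-demo
        spec:
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1

The test then deletes the controller with DeleteOptions propagationPolicy: Orphan; with kubectl that corresponds to "kubectl delete rc orphan-demo --cascade=orphan" (clients of this vintage used --cascade=false). The pods keep running with their ownerReferences removed, which is why the 30-second wait above sees no deletions.
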
• [SLOW TEST:40.873 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":243,"skipped":4044,"failed":0} SS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:06:25.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 in namespace container-probe-7473 Mar 29 22:06:30.055: INFO: Started pod liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 in namespace container-probe-7473 STEP: checking the pod's current state and verifying that restartCount is present Mar 29 22:06:30.058: INFO: Initial restart count of pod liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 is 0 Mar 29 22:06:50.099: INFO: Restart count of pod container-probe-7473/liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 is now 1 (20.04180312s elapsed) Mar 29 22:07:10.142: INFO: Restart count of pod container-probe-7473/liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 is now 2 (40.084867519s elapsed) Mar 29 22:07:30.211: INFO: Restart count of pod container-probe-7473/liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 is now 3 (1m0.153315895s elapsed) Mar 29 22:07:50.260: INFO: Restart count of pod container-probe-7473/liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 is now 4 (1m20.202106206s elapsed) Mar 29 22:09:02.443: INFO: Restart count of pod container-probe-7473/liveness-822bd3d6-9fe4-4411-b6d7-b78f8d3346f0 is now 5 (2m32.385784182s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:02.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7473" for this suite. 
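
The monotonically increasing restart counts above (note the widening gaps, which reflect the kubelet's restart back-off) come from a pod whose liveness probe is made to fail after a healthy window. A minimal sketch under that assumption; the command and timings are illustrative, not the test's exact spec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: liveness
        image: busybox
        # Healthy for ~20s, then the probed file disappears and the probe fails.
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 20; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5

Each probe failure past the failure threshold restarts the container in place, so status.containerStatuses[0].restartCount can only grow, which is exactly the property asserted here.
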
• [SLOW TEST:156.537 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4046,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:02.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 29 22:09:10.692: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 29 22:09:10.712: INFO: Pod pod-with-poststart-http-hook still exists Mar 29 22:09:12.713: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 29 22:09:12.717: INFO: Pod pod-with-poststart-http-hook still exists Mar 29 22:09:14.713: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 29 22:09:14.716: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:14.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1306" for this suite. 
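
For the postStart check above, the suite first runs a separate handler pod, then creates a pod whose container declares an HTTP postStart hook pointing at it. A rough sketch; host, path, and port are placeholders for the handler pod's address, not values from the test:

    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-http-demo
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.1
        lifecycle:
          postStart:
            httpGet:
              host: 10.244.1.99        # handler pod IP (placeholder)
              path: /echo?msg=poststart
              port: 8080

The hook fires immediately after the container starts, and the container is not treated as successfully started until the hook handler returns, so observing the request on the handler side proves the hook ran.
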
• [SLOW TEST:12.215 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4053,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:14.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 29 22:09:14.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46" in namespace "projected-5846" to be "success or failure" Mar 29 22:09:14.827: INFO: Pod "downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46": Phase="Pending", Reason="", readiness=false. Elapsed: 46.796707ms Mar 29 22:09:16.845: INFO: Pod "downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064935805s Mar 29 22:09:18.849: INFO: Pod "downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069282753s STEP: Saw pod success Mar 29 22:09:18.849: INFO: Pod "downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46" satisfied condition "success or failure" Mar 29 22:09:18.852: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46 container client-container: STEP: delete the pod Mar 29 22:09:18.887: INFO: Waiting for pod downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46 to disappear Mar 29 22:09:18.904: INFO: Pod downwardapi-volume-0d3cd1de-2c91-4244-8d16-8a7bcdd93a46 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:18.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5846" for this suite. 
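
The DefaultMode assertion above concerns the permission bits the kubelet gives files in a projected downward-API volume. A minimal sketch, assuming a mode of 0400 for illustration (the field name defaultMode is real; the specific value and names here are not taken from the test):

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          defaultMode: 0400
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name

The container's log should then show -r-------- for /etc/podinfo/podname, which is the kind of output the "Trying to get logs" step above inspects.
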
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4079,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:18.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 29 22:09:18.985: INFO: Waiting up to 5m0s for pod "pod-3b94cfbe-9423-4abf-b339-875ea62c6083" in namespace "emptydir-7802" to be "success or failure" Mar 29 22:09:18.992: INFO: Pod "pod-3b94cfbe-9423-4abf-b339-875ea62c6083": Phase="Pending", Reason="", readiness=false. Elapsed: 7.565569ms Mar 29 22:09:20.996: INFO: Pod "pod-3b94cfbe-9423-4abf-b339-875ea62c6083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011440286s Mar 29 22:09:23.000: INFO: Pod "pod-3b94cfbe-9423-4abf-b339-875ea62c6083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01551814s STEP: Saw pod success Mar 29 22:09:23.000: INFO: Pod "pod-3b94cfbe-9423-4abf-b339-875ea62c6083" satisfied condition "success or failure" Mar 29 22:09:23.003: INFO: Trying to get logs from node jerma-worker pod pod-3b94cfbe-9423-4abf-b339-875ea62c6083 container test-container: STEP: delete the pod Mar 29 22:09:23.062: INFO: Waiting for pod pod-3b94cfbe-9423-4abf-b339-875ea62c6083 to disappear Mar 29 22:09:23.076: INFO: Pod pod-3b94cfbe-9423-4abf-b339-875ea62c6083 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:23.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7802" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4086,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:23.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 29 22:09:23.157: INFO: Waiting up to 5m0s for pod "downward-api-25d05227-5ca2-4460-8564-77a98bd349c3" in namespace "downward-api-2063" to be "success or failure" Mar 29 22:09:23.187: INFO: Pod "downward-api-25d05227-5ca2-4460-8564-77a98bd349c3": Phase="Pending", Reason="", readiness=false. Elapsed: 29.856375ms Mar 29 22:09:25.190: INFO: Pod "downward-api-25d05227-5ca2-4460-8564-77a98bd349c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032667346s Mar 29 22:09:27.194: INFO: Pod "downward-api-25d05227-5ca2-4460-8564-77a98bd349c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036882124s STEP: Saw pod success Mar 29 22:09:27.194: INFO: Pod "downward-api-25d05227-5ca2-4460-8564-77a98bd349c3" satisfied condition "success or failure" Mar 29 22:09:27.197: INFO: Trying to get logs from node jerma-worker pod downward-api-25d05227-5ca2-4460-8564-77a98bd349c3 container dapi-container: STEP: delete the pod Mar 29 22:09:27.252: INFO: Waiting for pod downward-api-25d05227-5ca2-4460-8564-77a98bd349c3 to disappear Mar 29 22:09:27.262: INFO: Pod downward-api-25d05227-5ca2-4460-8564-77a98bd349c3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:27.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2063" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4090,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:27.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 29 22:09:32.408: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:32.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5909" for this suite. • [SLOW TEST:5.202 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":249,"skipped":4096,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:32.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:46.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4928" for this suite. 
• [SLOW TEST:14.151 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":250,"skipped":4138,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:46.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:46.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5309" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4138,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:46.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 29 22:09:47.501: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 29 22:09:49.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116587, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116587, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116587, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116587, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 22:09:52.553: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 22:09:52.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:53.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-470" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.011 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":252,"skipped":4144,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:53.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8b9158be-921a-4ddc-af65-7b7661d83f3d STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8b9158be-921a-4ddc-af65-7b7661d83f3d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:09:59.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3641" for this suite. 
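
The "waiting to observe update in volume" step above relies on the kubelet periodically re-projecting ConfigMap-backed volumes into running pods. A minimal sketch of a pod that would see such updates; names are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/config/data; sleep 2; done"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/config
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: projected-cm-demo

Editing the ConfigMap then changes the mounted file in place after the kubelet's next sync, with no pod restart (subPath mounts are the well-known exception to this refresh).
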
• [SLOW TEST:6.133 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4149,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:09:59.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4388 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4388 to expose endpoints map[] Mar 29 22:10:00.090: INFO: Get endpoints failed (15.270811ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 29 22:10:01.094: INFO: successfully validated that service endpoint-test2 in namespace services-4388 exposes endpoints map[] (1.019390861s elapsed) STEP: Creating pod pod1 in namespace services-4388 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4388 to expose endpoints map[pod1:[80]] Mar 29 22:10:04.219: INFO: successfully validated that service endpoint-test2 in namespace services-4388 exposes endpoints map[pod1:[80]] (3.116897765s elapsed) STEP: Creating pod pod2 in namespace services-4388 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4388 to expose endpoints map[pod1:[80] pod2:[80]] Mar 29 22:10:07.436: INFO: successfully validated that service endpoint-test2 in namespace services-4388 exposes endpoints map[pod1:[80] pod2:[80]] (3.213987009s elapsed) STEP: Deleting pod pod1 in namespace services-4388 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4388 to expose endpoints map[pod2:[80]] Mar 29 22:10:08.472: INFO: successfully validated that service endpoint-test2 in namespace services-4388 exposes endpoints map[pod2:[80]] (1.031518398s elapsed) STEP: Deleting pod pod2 in namespace services-4388 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4388 to expose endpoints map[] Mar 29 22:10:09.484: INFO: successfully validated that service endpoint-test2 in namespace services-4388 exposes endpoints map[] (1.007039093s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:10:09.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4388" for this suite. 
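
The endpoint map transitions above (map[] to map[pod1:[80]] to map[pod1:[80] pod2:[80]] and back) follow directly from label selection: the endpoints controller adds a ready pod's IP and port whenever its labels match the Service selector, and removes it on deletion. A sketch of the moving parts, with illustrative names and an HTTP server on port 80 (the agnhost image appears elsewhere in this log; the netexec arguments here are an assumption):

    apiVersion: v1
    kind: Service
    metadata:
      name: endpoint-demo
    spec:
      selector:
        app: endpoint-demo
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      labels:
        app: endpoint-demo
    spec:
      containers:
      - name: serve
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["netexec", "--http-port=80"]
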
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.550 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":254,"skipped":4154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:10:09.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 29 22:10:09.955: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 29 22:10:11.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116609, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116609, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116610, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721116609, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 29 22:10:15.001: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted 
configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:10:25.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2295" for this suite. STEP: Destroying namespace "webhook-2295-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.913 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":255,"skipped":4224,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:10:25.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 29 22:10:25.534: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 29 22:10:25.554: INFO: Waiting for terminating namespaces to be deleted... 
Mar 29 22:10:25.557: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 29 22:10:25.564: INFO: sample-webhook-deployment-5f65f8c764-zg5zx from webhook-2295 started at 2020-03-29 22:10:10 +0000 UTC (1 container statuses recorded) Mar 29 22:10:25.564: INFO: Container sample-webhook ready: true, restart count 0 Mar 29 22:10:25.564: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 22:10:25.564: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 22:10:25.564: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 22:10:25.564: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 22:10:25.564: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 29 22:10:25.572: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 22:10:25.572: INFO: Container kindnet-cni ready: true, restart count 0 Mar 29 22:10:25.572: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Mar 29 22:10:25.572: INFO: Container kube-bench ready: false, restart count 0 Mar 29 22:10:25.572: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 29 22:10:25.572: INFO: Container kube-proxy ready: true, restart count 0 Mar 29 22:10:25.572: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Mar 29 22:10:25.572: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1600e5e184a4f39b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:10:26.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-838" for this suite.
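
The FailedScheduling event above is the expected outcome of giving a pod a nodeSelector that no node's labels satisfy. A minimal sketch; the label key and value are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod
    spec:
      nodeSelector:
        label-that-no-node-has: "true"
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1

The scheduler leaves the pod Pending and records exactly the kind of event the test waits for: 0/3 nodes are available: 3 node(s) didn't match node selector.
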
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":256,"skipped":4225,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:10:26.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 29 22:10:29.706: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:10:29.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4041" for this suite. 
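
The DONE message matched above is read from the container status after exit. The test name pins down two variations: the container runs as a non-root user, and terminationMessagePath points somewhere other than the default /dev/termination-log. A sketch under those assumptions; the names, UID, and exact path are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                       # non-root
      containers:
      - name: term
        image: busybox
        command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
        terminationMessagePath: /dev/termination-custom-log

After the container exits, status.containerStatuses[0].state.terminated.message carries DONE, which is the value compared in the "Expected: &{DONE}" line.
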
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4236,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:10:29.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 22:10:29.806: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:10:30.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3821" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":258,"skipped":4239,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:10:30.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 29 22:10:30.914: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:10:47.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9385" for this suite. 
• [SLOW TEST:17.049 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":259,"skipped":4252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:10:47.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 29 22:10:47.956: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:10:54.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1486" for this suite. 
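
The behavior verified above follows from combining a failing init container with restartPolicy: Never: the failed init container is not retried, the pod goes straight to phase Failed, and the app containers never start. A minimal sketch with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox
        command: ["false"]         # always exits non-zero
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
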
• [SLOW TEST:6.316 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":260,"skipped":4275,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:10:54.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 22:10:54.309: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 29 22:10:54.320: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:10:54.355: INFO: Number of nodes with available pods: 0 Mar 29 22:10:54.355: INFO: Node jerma-worker is running more than one daemon pod Mar 29 22:10:55.359: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:10:55.362: INFO: Number of nodes with available pods: 0 Mar 29 22:10:55.362: INFO: Node jerma-worker is running more than one daemon pod Mar 29 22:10:56.360: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:10:56.364: INFO: Number of nodes with available pods: 0 Mar 29 22:10:56.364: INFO: Node jerma-worker is running more than one daemon pod Mar 29 22:10:57.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:10:57.361: INFO: Number of nodes with available pods: 0 Mar 29 22:10:57.361: INFO: Node jerma-worker is running more than one daemon pod Mar 29 22:10:58.360: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:10:58.363: INFO: Number of nodes with available pods: 2 Mar 29 22:10:58.363: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
Mar 29 22:10:58.427: INFO: Wrong image for pod: daemon-set-7mpj6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:10:58.427: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:10:58.435: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:10:59.445: INFO: Wrong image for pod: daemon-set-7mpj6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:10:59.445: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:10:59.448: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:00.440: INFO: Wrong image for pod: daemon-set-7mpj6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:00.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:00.444: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:01.440: INFO: Wrong image for pod: daemon-set-7mpj6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:01.440: INFO: Pod daemon-set-7mpj6 is not available Mar 29 22:11:01.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:01.443: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:02.440: INFO: Pod daemon-set-8htj9 is not available Mar 29 22:11:02.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:02.443: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:03.440: INFO: Pod daemon-set-8htj9 is not available Mar 29 22:11:03.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:03.444: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:04.440: INFO: Pod daemon-set-8htj9 is not available Mar 29 22:11:04.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:04.445: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:05.440: INFO: Wrong image for pod: daemon-set-dnj59. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:05.443: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:06.439: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:06.439: INFO: Pod daemon-set-dnj59 is not available Mar 29 22:11:06.442: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:07.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:07.440: INFO: Pod daemon-set-dnj59 is not available Mar 29 22:11:07.444: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:08.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:08.440: INFO: Pod daemon-set-dnj59 is not available Mar 29 22:11:08.444: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:09.440: INFO: Wrong image for pod: daemon-set-dnj59. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 29 22:11:09.440: INFO: Pod daemon-set-dnj59 is not available Mar 29 22:11:09.444: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:10.440: INFO: Pod daemon-set-wgx9m is not available Mar 29 22:11:10.444: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 29 22:11:10.448: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:10.451: INFO: Number of nodes with available pods: 1 Mar 29 22:11:10.451: INFO: Node jerma-worker2 is running more than one daemon pod Mar 29 22:11:11.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:11.460: INFO: Number of nodes with available pods: 1 Mar 29 22:11:11.460: INFO: Node jerma-worker2 is running more than one daemon pod Mar 29 22:11:12.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 29 22:11:12.459: INFO: Number of nodes with available pods: 2 Mar 29 22:11:12.459: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2705, will wait for the garbage collector to delete the pods Mar 29 22:11:12.552: INFO: Deleting DaemonSet.extensions daemon-set took: 6.085386ms Mar 29 22:11:12.852: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27961ms Mar 29 22:11:19.301: INFO: Number of nodes with available pods: 0 Mar 29 22:11:19.301: INFO: Number of running nodes: 0, number of available pods: 0 Mar 29 22:11:19.304: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2705/daemonsets","resourceVersion":"3805770"},"items":null} Mar 29 22:11:19.306: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2705/pods","resourceVersion":"3805770"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:11:19.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2705" for this suite. 
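The recurring "DaemonSet pods can't tolerate node jerma-control-plane" lines explain why only the two workers are checked: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test's pod template has no matching toleration. If the daemon were meant to cover that node as well, the template would need a toleration; a sketch under that assumption (kubectl patch accepts a YAML patch):

kubectl -n daemonsets-2705 patch daemonset daemon-set --type=merge -p '
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule'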
• [SLOW TEST:25.096 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":261,"skipped":4292,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:11:19.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 29 22:11:19.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9103' Mar 29 22:11:19.666: INFO: stderr: "" Mar 29 22:11:19.666: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 29 22:11:19.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9103' Mar 29 22:11:19.771: INFO: stderr: "" Mar 29 22:11:19.771: INFO: stdout: "update-demo-nautilus-pw7jm update-demo-nautilus-wzdbk " Mar 29 22:11:19.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw7jm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:19.874: INFO: stderr: "" Mar 29 22:11:19.874: INFO: stdout: "" Mar 29 22:11:19.874: INFO: update-demo-nautilus-pw7jm is created but not running Mar 29 22:11:24.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9103' Mar 29 22:11:24.989: INFO: stderr: "" Mar 29 22:11:24.989: INFO: stdout: "update-demo-nautilus-pw7jm update-demo-nautilus-wzdbk " Mar 29 22:11:24.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw7jm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:25.095: INFO: stderr: "" Mar 29 22:11:25.095: INFO: stdout: "true" Mar 29 22:11:25.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw7jm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:25.195: INFO: stderr: "" Mar 29 22:11:25.195: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 22:11:25.195: INFO: validating pod update-demo-nautilus-pw7jm Mar 29 22:11:25.199: INFO: got data: { "image": "nautilus.jpg" } Mar 29 22:11:25.199: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 22:11:25.199: INFO: update-demo-nautilus-pw7jm is verified up and running Mar 29 22:11:25.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzdbk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:25.287: INFO: stderr: "" Mar 29 22:11:25.287: INFO: stdout: "true" Mar 29 22:11:25.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzdbk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:25.371: INFO: stderr: "" Mar 29 22:11:25.371: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 22:11:25.371: INFO: validating pod update-demo-nautilus-wzdbk Mar 29 22:11:25.375: INFO: got data: { "image": "nautilus.jpg" } Mar 29 22:11:25.375: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 22:11:25.375: INFO: update-demo-nautilus-wzdbk is verified up and running STEP: scaling down the replication controller Mar 29 22:11:25.378: INFO: scanned /root for discovery docs: Mar 29 22:11:25.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9103' Mar 29 22:11:26.558: INFO: stderr: "" Mar 29 22:11:26.558: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 29 22:11:26.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9103' Mar 29 22:11:26.663: INFO: stderr: "" Mar 29 22:11:26.663: INFO: stdout: "update-demo-nautilus-pw7jm update-demo-nautilus-wzdbk " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 29 22:11:31.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9103' Mar 29 22:11:31.782: INFO: stderr: "" Mar 29 22:11:31.782: INFO: stdout: "update-demo-nautilus-pw7jm " Mar 29 22:11:31.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw7jm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:31.903: INFO: stderr: "" Mar 29 22:11:31.903: INFO: stdout: "true" Mar 29 22:11:31.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw7jm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:31.982: INFO: stderr: "" Mar 29 22:11:31.982: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 22:11:31.982: INFO: validating pod update-demo-nautilus-pw7jm Mar 29 22:11:31.985: INFO: got data: { "image": "nautilus.jpg" } Mar 29 22:11:31.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 22:11:31.985: INFO: update-demo-nautilus-pw7jm is verified up and running STEP: scaling up the replication controller Mar 29 22:11:31.988: INFO: scanned /root for discovery docs: Mar 29 22:11:31.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9103' Mar 29 22:11:33.124: INFO: stderr: "" Mar 29 22:11:33.124: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 29 22:11:33.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9103' Mar 29 22:11:33.220: INFO: stderr: "" Mar 29 22:11:33.220: INFO: stdout: "update-demo-nautilus-6z5tr update-demo-nautilus-pw7jm " Mar 29 22:11:33.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z5tr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:33.318: INFO: stderr: "" Mar 29 22:11:33.318: INFO: stdout: "" Mar 29 22:11:33.318: INFO: update-demo-nautilus-6z5tr is created but not running Mar 29 22:11:38.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9103' Mar 29 22:11:38.416: INFO: stderr: "" Mar 29 22:11:38.416: INFO: stdout: "update-demo-nautilus-6z5tr update-demo-nautilus-pw7jm " Mar 29 22:11:38.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z5tr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:38.505: INFO: stderr: "" Mar 29 22:11:38.505: INFO: stdout: "true" Mar 29 22:11:38.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z5tr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:38.609: INFO: stderr: "" Mar 29 22:11:38.609: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 22:11:38.609: INFO: validating pod update-demo-nautilus-6z5tr Mar 29 22:11:38.614: INFO: got data: { "image": "nautilus.jpg" } Mar 29 22:11:38.614: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 22:11:38.614: INFO: update-demo-nautilus-6z5tr is verified up and running Mar 29 22:11:38.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw7jm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:38.705: INFO: stderr: "" Mar 29 22:11:38.705: INFO: stdout: "true" Mar 29 22:11:38.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw7jm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9103' Mar 29 22:11:38.804: INFO: stderr: "" Mar 29 22:11:38.804: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 29 22:11:38.804: INFO: validating pod update-demo-nautilus-pw7jm Mar 29 22:11:38.807: INFO: got data: { "image": "nautilus.jpg" } Mar 29 22:11:38.808: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 29 22:11:38.808: INFO: update-demo-nautilus-pw7jm is verified up and running STEP: using delete to clean up resources Mar 29 22:11:38.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9103' Mar 29 22:11:38.923: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 29 22:11:38.923: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 29 22:11:38.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9103' Mar 29 22:11:39.015: INFO: stderr: "No resources found in kubectl-9103 namespace.\n" Mar 29 22:11:39.015: INFO: stdout: "" Mar 29 22:11:39.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9103 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 29 22:11:39.120: INFO: stderr: "" Mar 29 22:11:39.120: INFO: stdout: "update-demo-nautilus-6z5tr\nupdate-demo-nautilus-pw7jm\n" Mar 29 22:11:39.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9103' Mar 29 22:11:39.790: INFO: stderr: "No resources found in kubectl-9103 namespace.\n" Mar 29 22:11:39.790: INFO: stdout: "" Mar 29 22:11:39.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9103 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 29 22:11:39.883: INFO: stderr: "" Mar 29 22:11:39.883: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:11:39.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9103" for this suite. • [SLOW TEST:20.587 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":262,"skipped":4312,"failed":0} SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:11:39.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 29 22:11:40.419: INFO: Waiting up to 5m0s for pod "downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e" in namespace "downward-api-3889" to be "success or failure" Mar 29 22:11:40.435: INFO: Pod "downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.052504ms Mar 29 22:11:42.464: INFO: Pod "downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044454113s Mar 29 22:11:44.468: INFO: Pod "downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048361408s STEP: Saw pod success Mar 29 22:11:44.468: INFO: Pod "downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e" satisfied condition "success or failure" Mar 29 22:11:44.470: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e container dapi-container: STEP: delete the pod Mar 29 22:11:44.561: INFO: Waiting for pod downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e to disappear Mar 29 22:11:44.620: INFO: Pod downward-api-b8b38965-0640-48fe-8abb-7c567d699e0e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:11:44.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3889" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4314,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:11:44.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 29 22:11:44.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4402' Mar 29 22:11:44.910: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 29 22:11:44.910: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 29 22:11:44.915: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 29 22:11:44.942: INFO: scanned /root for discovery docs: Mar 29 22:11:44.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4402' Mar 29 22:12:00.789: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 29 22:12:00.789: INFO: stdout: "Created e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a\nScaling up e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 29 22:12:00.789: INFO: stdout: "Created e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a\nScaling up e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 29 22:12:00.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4402' Mar 29 22:12:00.883: INFO: stderr: "" Mar 29 22:12:00.883: INFO: stdout: "e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a-t4tzm " Mar 29 22:12:00.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a-t4tzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4402' Mar 29 22:12:00.993: INFO: stderr: "" Mar 29 22:12:00.994: INFO: stdout: "true" Mar 29 22:12:00.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a-t4tzm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4402' Mar 29 22:12:01.084: INFO: stderr: "" Mar 29 22:12:01.084: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 29 22:12:01.084: INFO: e2e-test-httpd-rc-1bbdafaa562bfe61a1712d3eedbb168a-t4tzm is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 29 22:12:01.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4402' Mar 29 22:12:01.188: INFO: stderr: "" Mar 29 22:12:01.188: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:12:01.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4402" for this suite. • [SLOW TEST:16.582 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":264,"skipped":4325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:12:01.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-26d6c866-4def-46cc-a792-27f7c54bf3fa STEP: Creating a pod to test consume secrets Mar 29 22:12:01.302: INFO: Waiting up to 5m0s for pod "pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875" in namespace "secrets-4645" to be "success or failure" Mar 29 22:12:01.326: INFO: Pod "pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875": Phase="Pending", Reason="", readiness=false. Elapsed: 23.965676ms Mar 29 22:12:03.398: INFO: Pod "pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095627204s Mar 29 22:12:05.402: INFO: Pod "pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.099844405s STEP: Saw pod success Mar 29 22:12:05.402: INFO: Pod "pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875" satisfied condition "success or failure" Mar 29 22:12:05.405: INFO: Trying to get logs from node jerma-worker pod pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875 container secret-volume-test: STEP: delete the pod Mar 29 22:12:05.437: INFO: Waiting for pod pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875 to disappear Mar 29 22:12:05.442: INFO: Pod pod-secrets-09e4ba60-73c3-421e-aef8-16165209e875 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:12:05.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4645" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4384,"failed":0} ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:12:05.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 29 22:12:05.513: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 29 22:12:14.569: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:12:14.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9091" for this suite. 
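This test exercises the graceful-deletion path: the pod is deleted with a grace period, the kubelet observes the termination notice and stops the container, and only then does the API object disappear, which both watchers confirm. The manual equivalent, sketched with a hypothetical pod name (the default grace period is 30 seconds):

# Follow lifecycle transitions in one terminal...
kubectl -n pods-9091 get pods --watch
# ...and delete gracefully in another; omit --grace-period to use the default.
kubectl -n pods-9091 delete pod my-pod --grace-period=30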
• [SLOW TEST:9.133 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4384,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:12:14.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:12:45.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9152" for this suite. STEP: Destroying namespace "nsdeletetest-9785" for this suite. Mar 29 22:12:45.844: INFO: Namespace nsdeletetest-9785 was already deleted STEP: Destroying namespace "nsdeletetest-8136" for this suite. 
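Namespace deletion is asynchronous, which is why the test has to wait: the namespace sits in a Terminating phase while its pods are garbage-collected, and only afterwards can a namespace of the same name be recreated. A sketch with a hypothetical name:

kubectl delete namespace scratch-ns
# Until the cleanup finishes, the phase reports Terminating.
kubectl get namespace scratch-ns -o jsonpath='{.status.phase}'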
• [SLOW TEST:31.276 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":267,"skipped":4390,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:12:45.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 29 22:12:50.012: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:12:50.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-545" for this suite. 
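The "Expected: &{OK} to match Container's Termination Message: OK" line is the test reading the termination message back from the container status. With TerminationMessagePolicy FallbackToLogsOnError the kubelet still prefers the termination-message file when the container wrote one; the log tail is used only for a failing container with an empty message. A minimal sketch, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod succeeds, the message surfaces in the container status.
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'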
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:12:50.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-3ed47f25-a534-428c-8016-4641a0fff59a STEP: Creating a pod to test consume secrets Mar 29 22:12:50.131: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f" in namespace "projected-8211" to be "success or failure" Mar 29 22:12:50.155: INFO: Pod "pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.97389ms Mar 29 22:12:52.158: INFO: Pod "pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027213438s Mar 29 22:12:54.162: INFO: Pod "pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030744352s STEP: Saw pod success Mar 29 22:12:54.162: INFO: Pod "pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f" satisfied condition "success or failure" Mar 29 22:12:54.174: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f container projected-secret-volume-test: STEP: delete the pod Mar 29 22:12:54.193: INFO: Waiting for pod pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f to disappear Mar 29 22:12:54.198: INFO: Pod pod-projected-secrets-e625f7e5-ff04-4eac-99c4-c8e4a132e82f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:12:54.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8211" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4438,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:12:54.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3e0078ea-2b9f-4cd0-a0e1-fcdbcf33e4d0 STEP: Creating a pod to test consume configMaps Mar 29 22:12:54.338: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11" in namespace "projected-4303" to be "success or failure" Mar 29 22:12:54.341: INFO: Pod "pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694251ms Mar 29 22:12:56.346: INFO: Pod "pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00798994s Mar 29 22:12:58.350: INFO: Pod "pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012207717s STEP: Saw pod success Mar 29 22:12:58.350: INFO: Pod "pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11" satisfied condition "success or failure" Mar 29 22:12:58.354: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11 container projected-configmap-volume-test: STEP: delete the pod Mar 29 22:12:58.376: INFO: Waiting for pod pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11 to disappear Mar 29 22:12:58.380: INFO: Pod pod-projected-configmaps-6d53b3e7-aaf4-441c-ad61-6f95934b3b11 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:12:58.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4303" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4471,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:12:58.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating an pod Mar 29 22:12:58.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4586 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 29 22:12:58.591: INFO: stderr: "" Mar 29 22:12:58.591: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 29 22:12:58.591: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 29 22:12:58.591: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4586" to be "running and ready, or succeeded" Mar 29 22:12:58.602: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.53936ms Mar 29 22:13:00.606: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01426234s Mar 29 22:13:02.610: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.018564529s Mar 29 22:13:02.610: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 29 22:13:02.610: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Mar 29 22:13:02.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4586' Mar 29 22:13:02.728: INFO: stderr: "" Mar 29 22:13:02.728: INFO: stdout: "I0329 22:13:00.743501 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/g6nn 428\nI0329 22:13:00.943652 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/rn9d 573\nI0329 22:13:01.143756 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/4qz 288\nI0329 22:13:01.343658 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/r8m 290\nI0329 22:13:01.543710 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/ghs 263\nI0329 22:13:01.743710 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/h2f7 270\nI0329 22:13:01.943691 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/6xfn 284\nI0329 22:13:02.143688 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/2sgc 491\nI0329 22:13:02.343670 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/q4f 447\nI0329 22:13:02.543720 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/cpmf 432\n" STEP: limiting log lines Mar 29 22:13:02.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4586 --tail=1' Mar 29 22:13:02.839: INFO: stderr: "" Mar 29 22:13:02.839: INFO: stdout: "I0329 22:13:02.743707 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/76k 278\n" Mar 29 22:13:02.839: INFO: got output "I0329 22:13:02.743707 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/76k 278\n" STEP: limiting log bytes Mar 29 22:13:02.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4586 --limit-bytes=1' Mar 29 22:13:02.958: INFO: stderr: "" Mar 29 22:13:02.958: INFO: stdout: "I" Mar 29 22:13:02.958: INFO: got output "I" STEP: exposing timestamps Mar 29 22:13:02.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4586 --tail=1 --timestamps' Mar 29 22:13:03.065: INFO: stderr: "" Mar 29 22:13:03.065: INFO: stdout: "2020-03-29T22:13:02.94386826Z I0329 22:13:02.943685 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/24lq 283\n" Mar 29 22:13:03.065: INFO: got output "2020-03-29T22:13:02.94386826Z I0329 22:13:02.943685 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/24lq 283\n" STEP: restricting to a time range Mar 29 22:13:05.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4586 --since=1s' Mar 29 22:13:05.678: INFO: stderr: "" Mar 29 22:13:05.678: INFO: stdout: "I0329 22:13:04.743689 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/n4fx 267\nI0329 22:13:04.943652 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/mtl 374\nI0329 22:13:05.143767 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/dmjq 585\nI0329 22:13:05.343708 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/xwqx 392\nI0329 22:13:05.543677 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/4z8 566\n" Mar 29 22:13:05.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4586 --since=24h' Mar 29 22:13:05.782: INFO: stderr: "" Mar 29
22:13:05.782: INFO: stdout: "I0329 22:13:00.743501 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/g6nn 428\nI0329 22:13:00.943652 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/rn9d 573\nI0329 22:13:01.143756 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/4qz 288\nI0329 22:13:01.343658 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/r8m 290\nI0329 22:13:01.543710 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/ghs 263\nI0329 22:13:01.743710 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/h2f7 270\nI0329 22:13:01.943691 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/6xfn 284\nI0329 22:13:02.143688 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/2sgc 491\nI0329 22:13:02.343670 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/q4f 447\nI0329 22:13:02.543720 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/cpmf 432\nI0329 22:13:02.743707 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/76k 278\nI0329 22:13:02.943685 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/24lq 283\nI0329 22:13:03.143713 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/fjj 395\nI0329 22:13:03.343663 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/qps4 317\nI0329 22:13:03.543704 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/xdkh 373\nI0329 22:13:03.743670 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/gdqz 453\nI0329 22:13:03.943701 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/jsn 211\nI0329 22:13:04.143703 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/wlsw 554\nI0329 22:13:04.343686 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/b955 292\nI0329 22:13:04.543651 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/9kjx 277\nI0329 22:13:04.743689 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/n4fx 267\nI0329 22:13:04.943652 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/mtl 374\nI0329 22:13:05.143767 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/dmjq 585\nI0329 22:13:05.343708 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/xwqx 392\nI0329 22:13:05.543677 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/4z8 566\nI0329 22:13:05.743681 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/47r 267\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 29 22:13:05.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4586' Mar 29 22:13:19.530: INFO: stderr: "" Mar 29 22:13:19.530: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:13:19.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4586" for this suite. 
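Every filter the test exercises maps directly to a kubectl logs flag, all visible in the commands above. For reference, against a running pod (here the one the test created, while it still existed):

kubectl -n kubectl-4586 logs logs-generator --tail=1          # last line only
kubectl -n kubectl-4586 logs logs-generator --limit-bytes=1   # first byte only
kubectl -n kubectl-4586 logs logs-generator --tail=1 --timestamps
kubectl -n kubectl-4586 logs logs-generator --since=1s        # only the last second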
• [SLOW TEST:21.152 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":271,"skipped":4478,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:13:19.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 22:13:23.654: INFO: Waiting up to 5m0s for pod "client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4" in namespace "pods-9695" to be "success or failure" Mar 29 22:13:23.659: INFO: Pod "client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.214815ms Mar 29 22:13:25.728: INFO: Pod "client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074207595s Mar 29 22:13:27.733: INFO: Pod "client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078292385s STEP: Saw pod success Mar 29 22:13:27.733: INFO: Pod "client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4" satisfied condition "success or failure" Mar 29 22:13:27.735: INFO: Trying to get logs from node jerma-worker pod client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4 container env3cont: STEP: delete the pod Mar 29 22:13:27.765: INFO: Waiting for pod client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4 to disappear Mar 29 22:13:27.804: INFO: Pod client-envvars-2f84b378-83e6-4908-92bd-5acc26fdbdd4 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:13:27.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9695" for this suite. 
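The env3cont container passes because the kubelet injects discovery variables for every service that existed when the pod started: a service named, say, fooservice yields FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT (name upper-cased, dashes turned into underscores). A quick way to inspect them in a running pod, with hypothetical names:

kubectl -n pods-9695 exec client-pod -- env | grep FOOSERVICE
# Expected shape of the output (addresses illustrative):
# FOOSERVICE_SERVICE_HOST=10.96.0.42
# FOOSERVICE_SERVICE_PORT=8765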
• [SLOW TEST:8.273 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4479,"failed":0} [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:13:27.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-01c97bbe-c1de-4759-b02d-9cdf143d6804 in namespace container-probe-1403 Mar 29 22:13:31.909: INFO: Started pod test-webserver-01c97bbe-c1de-4759-b02d-9cdf143d6804 in namespace container-probe-1403 STEP: checking the pod's current state and verifying that restartCount is present Mar 29 22:13:31.912: INFO: Initial restart count of pod test-webserver-01c97bbe-c1de-4759-b02d-9cdf143d6804 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:17:32.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1403" for this suite. 
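The four-minute gap between "Initial restart count ... is 0" and teardown is the whole test: it watches that a probe against a healthy endpoint never increments restartCount. A sketch of such a probe (image, path, and timings are illustrative, not the test's exact values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: webserver
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# A healthy target keeps this at 0:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'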
• [SLOW TEST:244.733 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:17:32.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 29 22:17:32.612: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:17:33.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5724" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":274,"skipped":4502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 29 22:17:33.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 29 22:17:38.568: INFO: Successfully updated pod "annotationupdatec57b9b40-7360-4b70-a67f-ed9fcfe162b7" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 29 22:17:40.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9587" for this suite. 
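The "Successfully updated pod annotationupdate..." step is the interesting one: a downward API volume re-projects pod metadata, so annotations changed on a live pod appear in the mounted file without a restart, on the kubelet's next sync. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: one
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# The file under /etc/podinfo follows this change after the next kubelet sync.
kubectl annotate pod annotation-demo build=two --overwrite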
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 22:17:40.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-5445d34b-4531-4ba6-acda-83d61179de36
STEP: Creating configMap with name cm-test-opt-upd-e9d25f6e-c175-4c8f-b0a4-cd659bc5bb9a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5445d34b-4531-4ba6-acda-83d61179de36
STEP: Updating configmap cm-test-opt-upd-e9d25f6e-c175-4c8f-b0a4-cd659bc5bb9a
STEP: Creating configMap with name cm-test-opt-create-a065c637-92d9-4a09-a78d-84a9ceafe9ee
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 22:17:48.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5137" for this suite.
• [SLOW TEST:8.219 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4540,"failed":0}
SSS
------------------------------
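The three ConfigMaps above (opt-del, opt-upd, opt-create) are mounted with Optional set, which is why the pod can start before cm-test-opt-create exists and keeps running after cm-test-opt-del is deleted. A minimal sketch of one such optional mount, with assumed names and image; the real test pod mounts all three, one per container, as the per-node pod listing further down shows:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalConfigMapPod mounts a ConfigMap that may not exist yet. Because
// Optional is true, a missing ConfigMap is not a pod-start failure, and the
// kubelet adds or removes the projected files as the ConfigMap is created,
// updated, or deleted.
func optionalConfigMapPod() *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "cm-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"}, // may not exist yet
						Optional:             &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "createcm-volume-test",
				Image:   "busybox:1.31", // assumed image
				Command: []string{"sh", "-c", "while true; do ls /etc/cm; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "cm-volume", MountPath: "/etc/cm",
				}},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", optionalConfigMapPod()) }
```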
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 22:17:48.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 29 22:17:48.894: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 29 22:17:48.906: INFO: Waiting for terminating namespaces to be deleted...
Mar 29 22:17:48.909: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
Mar 29 22:17:48.914: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
Mar 29 22:17:48.914: INFO: Container kindnet-cni ready: true, restart count 0
Mar 29 22:17:48.914: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
Mar 29 22:17:48.914: INFO: Container kube-proxy ready: true, restart count 0
Mar 29 22:17:48.914: INFO: pod-configmaps-e00ffb3a-eb7d-498b-9bf5-d4707ef698f7 from configmap-5137 started at 2020-03-29 22:17:40 +0000 UTC (3 container statuses recorded)
Mar 29 22:17:48.914: INFO: Container createcm-volume-test ready: true, restart count 0
Mar 29 22:17:48.914: INFO: Container delcm-volume-test ready: true, restart count 0
Mar 29 22:17:48.914: INFO: Container updcm-volume-test ready: true, restart count 0
Mar 29 22:17:48.914: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
Mar 29 22:17:48.919: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
Mar 29 22:17:48.919: INFO: Container kindnet-cni ready: true, restart count 0
Mar 29 22:17:48.919: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded)
Mar 29 22:17:48.919: INFO: Container kube-bench ready: false, restart count 0
Mar 29 22:17:48.919: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded)
Mar 29 22:17:48.919: INFO: Container kube-proxy ready: true, restart count 0
Mar 29 22:17:48.919: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded)
Mar 29 22:17:48.919: INFO: Container kube-hunter ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node that can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-04352122-9b7b-4a40-bae3-0c16f3cd2f93 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-04352122-9b7b-4a40-bae3-0c16f3cd2f93 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-04352122-9b7b-4a40-bae3-0c16f3cd2f93
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 22:22:59.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4413" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:310.282 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":277,"skipped":4543,"failed":0}
SSSSSSSSS
------------------------------
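The conflict this spec validates follows from how hostPort wildcards work: an empty HostIP is treated as 0.0.0.0 and claims the port on every address of the node, so any later pod asking for the same port and protocol on a specific address cannot fit on that node. A minimal sketch of the two competing port declarations, with an assumed pause image (pod and container names are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// portSpec returns a container that asks the node for hostPort 54322/TCP on
// the given host address; an empty hostIP means 0.0.0.0, i.e. all addresses.
func portSpec(name, hostIP string) corev1.Container {
	return corev1.Container{
		Name:  name,
		Image: "k8s.gcr.io/pause:3.1", // assumed image
		Ports: []corev1.ContainerPort{{
			ContainerPort: 54322,
			HostPort:      54322,
			HostIP:        hostIP,
			Protocol:      corev1.ProtocolTCP,
		}},
	}
}

func main() {
	fmt.Printf("pod4: %+v\n", portSpec("pod4", ""))          // wildcard 0.0.0.0:54322
	fmt.Printf("pod5: %+v\n", portSpec("pod5", "127.0.0.1")) // conflicts on the same node
}
```

Swapping the creation order would not help: for the same protocol, 127.0.0.1:54322 and 0.0.0.0:54322 conflict in either direction, so the scheduler must refuse to co-locate the two pods.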
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 29 22:22:59.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 29 22:22:59.738: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 29 22:23:01.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721117379, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721117379, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721117379, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721117379, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 29 22:23:04.809: INFO: Waiting for the number of service e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 29 22:23:05.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2089" for this suite.
STEP: Destroying namespace "webhook-2089-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.404 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":278,"skipped":4552,"failed":0}
SSSSSSSSSSSSS
Mar 29 22:23:05.500: INFO: Running AfterSuite actions on all nodes
Mar 29 22:23:05.500: INFO: Running AfterSuite actions on node 1
Mar 29 22:23:05.500: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0}

Ran 278 of 4843 Specs in 4596.070 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS
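For reference on the final spec: it lists the validating webhook configurations it created, deletes them as a collection, and then confirms that a ConfigMap the webhook had been rejecting is admitted. A minimal sketch of that list/delete-collection flow with client-go (v0.18+ signatures; the label selector is a hypothetical stand-in, and only the kubeconfig path is taken from this run):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged throughout this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed label selector; the e2e framework uses its own labels.
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"}

	hooks := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
	list, err := hooks.List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d validating webhook configurations\n", len(list.Items))

	// Delete the whole collection in one call; once these are gone, requests
	// the webhooks used to reject pass admission again.
	if err := hooks.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}
```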