I0426 21:07:01.879204 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0426 21:07:01.879423 6 e2e.go:109] Starting e2e run "5573ce9e-d66e-4c38-bb9a-e60f768b3ded" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587935220 - Will randomize all specs
Will run 278 of 4842 specs
Apr 26 21:07:01.944: INFO: >>> kubeConfig: /root/.kube/config
Apr 26 21:07:01.949: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 26 21:07:01.970: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 26 21:07:01.999: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 26 21:07:01.999: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 26 21:07:01.999: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 26 21:07:02.010: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 26 21:07:02.010: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 26 21:07:02.010: INFO: e2e test version: v1.17.4
Apr 26 21:07:02.011: INFO: kube-apiserver version: v1.17.2
Apr 26 21:07:02.011: INFO: >>> kubeConfig: /root/.kube/config
Apr 26 21:07:02.016: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:02.016: INFO: >>> kubeConfig: /root/.kube/config
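The spec starting above streams container logs over a websocket; under the hood the client hits the pod's `log` subresource, the same endpoint plain HTTP clients use. A minimal sketch of how that request path is assembled, assuming the standard `/api/v1` layout (the namespace, pod, and container names below are hypothetical, not taken from the log):

```go
package main

import (
	"fmt"
	"net/url"
)

// podLogPath assembles the API path for the pod "log" subresource; the
// same endpoint serves plain HTTP reads and, with an Upgrade header,
// websocket streaming as exercised by the test.
func podLogPath(namespace, pod, container string) string {
	path := fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/log",
		url.PathEscape(namespace), url.PathEscape(pod))
	if container != "" {
		q := url.Values{}
		q.Set("container", container)
		path += "?" + q.Encode()
	}
	return path
}

func main() {
	// Hypothetical names for illustration only.
	fmt.Println(podLogPath("pods-7408", "pod-logs-websocket", "main"))
}
```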
STEP: Building a namespace api object, basename pods
Apr 26 21:07:02.086: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:07:02.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:06.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7408" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":14,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:06.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 21:07:06.850: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 21:07:08.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532026, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532026, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532026, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532026, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 21:07:11.897: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:11.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-458" for this suite.
STEP: Destroying namespace "webhook-458-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.853 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":2,"skipped":21,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:12.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 26 21:07:20.121: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 26 21:07:20.127: INFO: Pod pod-with-prestop-http-hook still exists
Apr 26 21:07:22.128: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 26 21:07:22.132: INFO: Pod pod-with-prestop-http-hook still exists
Apr 26 21:07:24.128: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 26 21:07:24.139: INFO: Pod pod-with-prestop-http-hook still exists
Apr 26 21:07:26.128: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 26 21:07:26.153: INFO: Pod pod-with-prestop-http-hook still exists
Apr 26 21:07:28.128: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 26 21:07:28.132: INFO: Pod pod-with-prestop-http-hook still exists
Apr 26 21:07:30.128: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 26 21:07:30.132: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:30.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9055" for this suite.
• [SLOW TEST:18.144 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:30.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 21:07:30.788: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 21:07:32.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532050, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532050, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532050, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532050, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 21:07:35.895: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:35.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5031" for this suite.
STEP: Destroying namespace "webhook-5031-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.028 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":4,"skipped":50,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:36.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Apr 26 21:07:36.339: INFO: Waiting up to 5m0s for pod "client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491" in namespace "containers-7062" to be "success or failure"
Apr 26 21:07:36.342: INFO: Pod "client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491": Phase="Pending", Reason="", readiness=false. Elapsed: 3.566496ms
Apr 26 21:07:38.347: INFO: Pod "client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007858377s
Apr 26 21:07:40.351: INFO: Pod "client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012150289s
STEP: Saw pod success
Apr 26 21:07:40.351: INFO: Pod "client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491" satisfied condition "success or failure"
Apr 26 21:07:40.354: INFO: Trying to get logs from node jerma-worker pod client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491 container test-container:
STEP: delete the pod
Apr 26 21:07:40.374: INFO: Waiting for pod client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491 to disappear
Apr 26 21:07:40.378: INFO: Pod client-containers-3cc792ee-71c7-4d0c-b5eb-6f46e2944491 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:40.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7062" for this suite.
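The entrypoint-override test above exercises how Kubernetes resolves a container's final command line from four inputs: the image's ENTRYPOINT and CMD, and the pod spec's `command` and `args`. A sketch of the documented resolution rules (the function name and the sample values are illustrative):

```go
package main

import "fmt"

// effectiveCommand applies the documented Kubernetes rules: `command`
// replaces the image ENTRYPOINT, `args` replace the image CMD, and if
// `command` is set while `args` is not, the image CMD is ignored.
func effectiveCommand(entrypoint, cmd, command, args []string) []string {
	if len(command) > 0 && len(args) == 0 {
		return command // command set, args unset: image CMD is dropped
	}
	exec := entrypoint
	if len(command) > 0 {
		exec = command
	}
	tail := cmd
	if len(args) > 0 {
		tail = args
	}
	return append(append([]string{}, exec...), tail...)
}

func main() {
	// Image defaults vs. a pod spec that overrides the entrypoint.
	fmt.Println(effectiveCommand(
		[]string{"/entrypoint.sh"}, []string{"serve"},
		[]string{"/bin/echo", "hello"}, nil)) // → [/bin/echo hello]
}
```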
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":73,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:40.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:07:40.420: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:40.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2386" for this suite.
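Creating a CustomResourceDefinition, as the spec above does, hinges on one naming rule: the object's `metadata.name` must be exactly `<spec.names.plural>.<spec.group>`, or the apiserver rejects it. A sketch of that rule (the plural and group below are hypothetical examples):

```go
package main

import "fmt"

// crdName derives the only metadata.name the apiserver accepts for a
// CustomResourceDefinition with the given plural and group.
func crdName(plural, group string) string {
	return plural + "." + group
}

func main() {
	// Hypothetical resource, in the style of e2e test fixtures.
	fmt.Println(crdName("widgets", "mygroup.example.com")) // → widgets.mygroup.example.com
}
```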
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":6,"skipped":83,"failed":0}
SSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:41.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:07:41.089: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e3dfcad8-418c-49b0-84c2-c23e453ae7bf" in namespace "security-context-test-3642" to be "success or failure"
Apr 26 21:07:41.134: INFO: Pod "alpine-nnp-false-e3dfcad8-418c-49b0-84c2-c23e453ae7bf": Phase="Pending", Reason="", readiness=false. Elapsed: 44.992954ms
Apr 26 21:07:43.147: INFO: Pod "alpine-nnp-false-e3dfcad8-418c-49b0-84c2-c23e453ae7bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057623034s
Apr 26 21:07:45.151: INFO: Pod "alpine-nnp-false-e3dfcad8-418c-49b0-84c2-c23e453ae7bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061584486s
Apr 26 21:07:45.151: INFO: Pod "alpine-nnp-false-e3dfcad8-418c-49b0-84c2-c23e453ae7bf" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:45.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3642" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":86,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:45.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 26 21:07:49.786: INFO: Successfully updated pod "annotationupdate89e42893-0eff-4553-9131-1bcfc836cd80"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:51.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2376" for this suite.
• [SLOW TEST:6.646 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:51.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Apr 26 21:07:57.913: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3917 PodName:pod-sharedvolume-d5bcc512-4d0e-456c-a7ed-35dc66958c5f ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 26 21:07:57.913: INFO: >>> kubeConfig: /root/.kube/config
I0426 21:07:57.951806 6 log.go:172] (0xc0029fa4d0) (0xc002875ea0) Create stream
I0426 21:07:57.951837 6 log.go:172] (0xc0029fa4d0) (0xc002875ea0) Stream added, broadcasting: 1
I0426 21:07:57.956867 6 log.go:172] (0xc0029fa4d0) Reply frame received for 1
I0426 21:07:57.956917 6 log.go:172] (0xc0029fa4d0) (0xc0029aaa00) Create stream
I0426 21:07:57.956930 6 log.go:172] (0xc0029fa4d0) (0xc0029aaa00) Stream added, broadcasting: 3
I0426 21:07:57.958238 6 log.go:172] (0xc0029fa4d0) Reply frame received for 3
I0426 21:07:57.958284 6 log.go:172] (0xc0029fa4d0) (0xc0027c2000) Create stream
I0426 21:07:57.958298 6 log.go:172] (0xc0029fa4d0) (0xc0027c2000) Stream added, broadcasting: 5
I0426 21:07:57.959186 6 log.go:172] (0xc0029fa4d0) Reply frame received for 5
I0426 21:07:58.013065 6 log.go:172] (0xc0029fa4d0) Data frame received for 3
I0426 21:07:58.013094 6 log.go:172] (0xc0029fa4d0) Data frame received for 5
I0426 21:07:58.013257 6 log.go:172] (0xc0027c2000) (5) Data frame handling
I0426 21:07:58.013306 6 log.go:172] (0xc0029aaa00) (3) Data frame handling
I0426 21:07:58.013333 6 log.go:172] (0xc0029aaa00) (3) Data frame sent
I0426 21:07:58.013347 6 log.go:172] (0xc0029fa4d0) Data frame received for 3
I0426 21:07:58.013362 6 log.go:172] (0xc0029aaa00) (3) Data frame handling
I0426 21:07:58.015089 6 log.go:172] (0xc0029fa4d0) Data frame received for 1
I0426 21:07:58.015105 6 log.go:172] (0xc002875ea0) (1) Data frame handling
I0426 21:07:58.015121 6 log.go:172] (0xc002875ea0) (1) Data frame sent
I0426 21:07:58.015140 6 log.go:172] (0xc0029fa4d0) (0xc002875ea0) Stream removed, broadcasting: 1
I0426 21:07:58.015328 6 log.go:172] (0xc0029fa4d0) Go away received
I0426 21:07:58.015449 6 log.go:172] (0xc0029fa4d0) (0xc002875ea0) Stream removed, broadcasting: 1
I0426 21:07:58.015467 6 log.go:172] (0xc0029fa4d0) (0xc0029aaa00) Stream removed, broadcasting: 3
I0426 21:07:58.015475 6 log.go:172] (0xc0029fa4d0) (0xc0027c2000) Stream removed, broadcasting: 5
Apr 26 21:07:58.015: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:07:58.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3917" for this suite.
• [SLOW TEST:6.211 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":9,"skipped":148,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:07:58.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:07:58.104: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-48c6eaef-babf-4d51-b6f4-2dbbcfa270c6" in namespace "security-context-test-4418" to be "success or failure"
Apr 26 21:07:58.153: INFO: Pod "busybox-readonly-false-48c6eaef-babf-4d51-b6f4-2dbbcfa270c6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.557281ms
Apr 26 21:08:00.157: INFO: Pod "busybox-readonly-false-48c6eaef-babf-4d51-b6f4-2dbbcfa270c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052814446s
Apr 26 21:08:02.160: INFO: Pod "busybox-readonly-false-48c6eaef-babf-4d51-b6f4-2dbbcfa270c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056058198s
Apr 26 21:08:02.160: INFO: Pod "busybox-readonly-false-48c6eaef-babf-4d51-b6f4-2dbbcfa270c6" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:08:02.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4418" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":150,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:08:02.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:08:06.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2130" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":11,"skipped":163,"failed":0}
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:08:06.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-472fdc40-1880-4e9a-ab51-b383f60d024b
STEP: Creating configMap with name cm-test-opt-upd-b9a93f18-6a54-4bd7-af47-463a610f84b6
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-472fdc40-1880-4e9a-ab51-b383f60d024b
STEP: Updating configmap cm-test-opt-upd-b9a93f18-6a54-4bd7-af47-463a610f84b6
STEP: Creating configMap with name cm-test-opt-create-c1df4235-df13-4221-a4d9-f67d0801df52
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:08:15.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4915" for this suite.
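Both EmptyDir specs above rely on the same mechanism: containers in one pod mount a single scratch directory, so a file written by one side is visible to the other, which is exactly what the earlier `cat /usr/share/volumeshare/shareddata.txt` exec verifies. A local analogue using two goroutines sharing a temp directory (the file name and message mirror the test's fixtures, but this is an illustration, not the test code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

// sharedScratch mimics an emptyDir volume: a "writer container"
// goroutine drops a file into a shared directory and a "reader
// container" picks it up, just as the busybox side container writes
// shareddata.txt for the main container to cat.
func sharedScratch() (string, error) {
	dir, err := os.MkdirTemp("", "emptydir-demo")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)

	path := filepath.Join(dir, "shareddata.txt")
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // the writer side
		defer wg.Done()
		_ = os.WriteFile(path, []byte("Hello from the busybox sub container"), 0o644)
	}()
	wg.Wait() // once the write lands, the shared dir makes it visible

	data, err := os.ReadFile(path) // the reader side
	return string(data), err
}

func main() {
	msg, err := sharedScratch()
	if err != nil {
		panic(err)
	}
	fmt.Println(msg)
}
```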
• [SLOW TEST:8.789 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":163,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:08:15.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 21:08:15.861: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 21:08:17.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 26 21:08:19.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723532095, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 21:08:22.944: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the
AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:08:33.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1619" for this suite. STEP: Destroying namespace "webhook-1619-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.003 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":13,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
[BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:08:33.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4776/secret-test-bdedf837-18ef-400a-80da-7b98a1ce1fad STEP: Creating a pod to test consume secrets Apr 26 21:08:33.286: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9" in namespace "secrets-4776" to be "success or failure" Apr 26 21:08:33.327: INFO: Pod "pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 40.41614ms Apr 26 21:08:35.331: INFO: Pod "pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044471356s Apr 26 21:08:37.344: INFO: Pod "pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.057918685s STEP: Saw pod success Apr 26 21:08:37.344: INFO: Pod "pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9" satisfied condition "success or failure" Apr 26 21:08:37.347: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9 container env-test: STEP: delete the pod Apr 26 21:08:37.384: INFO: Waiting for pod pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9 to disappear Apr 26 21:08:37.612: INFO: Pod pod-configmaps-c8616b7b-419e-4994-8098-09af56cb3ad9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:08:37.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4776" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:08:37.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test 
downward API volume plugin Apr 26 21:08:37.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca" in namespace "downward-api-7547" to be "success or failure" Apr 26 21:08:37.823: INFO: Pod "downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.972045ms Apr 26 21:08:39.867: INFO: Pod "downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047779648s Apr 26 21:08:41.871: INFO: Pod "downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051834263s STEP: Saw pod success Apr 26 21:08:41.871: INFO: Pod "downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca" satisfied condition "success or failure" Apr 26 21:08:41.877: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca container client-container: STEP: delete the pod Apr 26 21:08:41.934: INFO: Waiting for pod downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca to disappear Apr 26 21:08:41.944: INFO: Pod downwardapi-volume-0a0bae4b-425b-4684-a9f7-6fb125200cca no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:08:41.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7547" for this suite. 
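The Downward API test above verifies that files projected into the volume receive the volume's defaultMode. A sketch of such a pod (name, image, and the 0400 mode are illustrative assumptions, not taken from this run):

```yaml
# Hypothetical example: defaultMode applies to every file the
# downwardAPI volume projects unless an item overrides it.
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400       # mode checked by the test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```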
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:08:41.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 26 21:08:46.567: INFO: Successfully updated pod "pod-update-activedeadlineseconds-00b71f74-acce-4efc-811c-420e2f1e218f" Apr 26 21:08:46.567: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-00b71f74-acce-4efc-811c-420e2f1e218f" in namespace "pods-7526" to be "terminated due to deadline exceeded" Apr 26 21:08:46.573: INFO: Pod "pod-update-activedeadlineseconds-00b71f74-acce-4efc-811c-420e2f1e218f": Phase="Running", Reason="", readiness=true. Elapsed: 6.767042ms Apr 26 21:08:48.578: INFO: Pod "pod-update-activedeadlineseconds-00b71f74-acce-4efc-811c-420e2f1e218f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.01097626s Apr 26 21:08:48.578: INFO: Pod "pod-update-activedeadlineseconds-00b71f74-acce-4efc-811c-420e2f1e218f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:08:48.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7526" for this suite. • [SLOW TEST:6.633 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":273,"failed":0} SSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:08:48.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Apr 26 21:08:48.687: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in 
namespace "hostpath-4525" to be "success or failure" Apr 26 21:08:48.692: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719032ms Apr 26 21:08:50.695: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008291845s Apr 26 21:08:52.700: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.012858379s Apr 26 21:08:54.704: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017183866s STEP: Saw pod success Apr 26 21:08:54.704: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 26 21:08:54.708: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 26 21:08:54.729: INFO: Waiting for pod pod-host-path-test to disappear Apr 26 21:08:54.756: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:08:54.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4525" for this suite. 
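The hostPath test above ("should give a volume the correct mode") mounts a host directory and has the container inspect the mount's file mode. A minimal sketch of that shape (path, image, and names are illustrative):

```yaml
# Hypothetical example: a hostPath volume; the container reads back
# the permission bits of the mounted directory.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate   # create the host dir if missing
```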
• [SLOW TEST:6.177 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":277,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:08:54.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-41fba8fb-30b6-42ed-a9a2-042998d05f90 STEP: Creating a pod to test consume secrets Apr 26 21:08:54.897: INFO: Waiting up to 5m0s for pod "pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b" in namespace "secrets-2423" to be "success or failure" Apr 26 21:08:54.901: INFO: Pod "pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.378004ms Apr 26 21:08:56.905: INFO: Pod "pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008417743s Apr 26 21:08:58.910: INFO: Pod "pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012724973s STEP: Saw pod success Apr 26 21:08:58.910: INFO: Pod "pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b" satisfied condition "success or failure" Apr 26 21:08:58.913: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b container secret-volume-test: STEP: delete the pod Apr 26 21:08:58.934: INFO: Waiting for pod pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b to disappear Apr 26 21:08:58.951: INFO: Pod pod-secrets-691529d6-a7c5-4835-9314-a698a80e201b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:08:58.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2423" for this suite. STEP: Destroying namespace "secret-namespace-1574" for this suite. 
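The Secrets test above relies on secrets being namespaced: a pod resolves `secretName` only within its own namespace, so an identically named secret elsewhere (here, in "secret-namespace-1574") cannot conflict. An illustrative sketch (namespace and names are hypothetical):

```yaml
# Hypothetical example: only the secret named "shared-name" in this
# pod's own namespace is mounted, even if another namespace holds a
# secret with the same name.
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-demo
  namespace: demo-a
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name   # resolved in namespace demo-a only
```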
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":297,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:08:59.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 26 21:08:59.067: INFO: Waiting up to 5m0s for pod "pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51" in namespace "emptydir-7481" to be "success or failure" Apr 26 21:08:59.071: INFO: Pod "pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095484ms Apr 26 21:09:01.083: INFO: Pod "pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016006835s Apr 26 21:09:03.089: INFO: Pod "pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022260759s STEP: Saw pod success Apr 26 21:09:03.089: INFO: Pod "pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51" satisfied condition "success or failure" Apr 26 21:09:03.092: INFO: Trying to get logs from node jerma-worker2 pod pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51 container test-container: STEP: delete the pod Apr 26 21:09:03.108: INFO: Waiting for pod pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51 to disappear Apr 26 21:09:03.125: INFO: Pod pod-afa8c2f1-ad5d-4ef9-8632-50c5a238cd51 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:09:03.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7481" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:09:03.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 26 21:09:11.305: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 26 21:09:11.309: INFO: Pod pod-with-poststart-exec-hook still exists Apr 26 21:09:13.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 26 21:09:13.352: INFO: Pod pod-with-poststart-exec-hook still exists Apr 26 21:09:15.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 26 21:09:15.314: INFO: Pod pod-with-poststart-exec-hook still exists Apr 26 21:09:17.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 26 21:09:17.313: INFO: Pod pod-with-poststart-exec-hook still exists Apr 26 21:09:19.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 26 21:09:19.314: INFO: Pod pod-with-poststart-exec-hook still exists Apr 26 21:09:21.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 26 21:09:21.314: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:09:21.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3966" for this suite. 
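The lifecycle-hook test above creates a pod with a postStart exec hook and confirms the hook command ran. A minimal sketch of a pod with such a hook (names, image, and the hook command are illustrative):

```yaml
# Hypothetical example: the postStart exec hook runs in the container
# immediately after it starts; the container is not marked Running
# until the hook completes.
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```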
• [SLOW TEST:18.190 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:09:21.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9440 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9440 STEP: creating replication controller externalsvc in namespace services-9440 I0426 
21:09:21.505597 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9440, replica count: 2 I0426 21:09:24.556079 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 21:09:27.556359 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 26 21:09:27.616: INFO: Creating new exec pod Apr 26 21:09:31.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9440 execpodwr2x9 -- /bin/sh -x -c nslookup nodeport-service' Apr 26 21:09:34.156: INFO: stderr: "I0426 21:09:34.034253 28 log.go:172] (0xc0003db1e0) (0xc0003e28c0) Create stream\nI0426 21:09:34.034315 28 log.go:172] (0xc0003db1e0) (0xc0003e28c0) Stream added, broadcasting: 1\nI0426 21:09:34.037709 28 log.go:172] (0xc0003db1e0) Reply frame received for 1\nI0426 21:09:34.037756 28 log.go:172] (0xc0003db1e0) (0xc00070e000) Create stream\nI0426 21:09:34.037785 28 log.go:172] (0xc0003db1e0) (0xc00070e000) Stream added, broadcasting: 3\nI0426 21:09:34.038977 28 log.go:172] (0xc0003db1e0) Reply frame received for 3\nI0426 21:09:34.039048 28 log.go:172] (0xc0003db1e0) (0xc00075c000) Create stream\nI0426 21:09:34.039074 28 log.go:172] (0xc0003db1e0) (0xc00075c000) Stream added, broadcasting: 5\nI0426 21:09:34.040150 28 log.go:172] (0xc0003db1e0) Reply frame received for 5\nI0426 21:09:34.138602 28 log.go:172] (0xc0003db1e0) Data frame received for 5\nI0426 21:09:34.138633 28 log.go:172] (0xc00075c000) (5) Data frame handling\nI0426 21:09:34.138654 28 log.go:172] (0xc00075c000) (5) Data frame sent\n+ nslookup nodeport-service\nI0426 21:09:34.147671 28 log.go:172] (0xc0003db1e0) Data frame received for 3\nI0426 21:09:34.147737 28 log.go:172] (0xc00070e000) (3) Data frame handling\nI0426 
21:09:34.147768 28 log.go:172] (0xc00070e000) (3) Data frame sent\nI0426 21:09:34.148747 28 log.go:172] (0xc0003db1e0) Data frame received for 3\nI0426 21:09:34.148759 28 log.go:172] (0xc00070e000) (3) Data frame handling\nI0426 21:09:34.148765 28 log.go:172] (0xc00070e000) (3) Data frame sent\nI0426 21:09:34.149320 28 log.go:172] (0xc0003db1e0) Data frame received for 5\nI0426 21:09:34.149356 28 log.go:172] (0xc00075c000) (5) Data frame handling\nI0426 21:09:34.149422 28 log.go:172] (0xc0003db1e0) Data frame received for 3\nI0426 21:09:34.149437 28 log.go:172] (0xc00070e000) (3) Data frame handling\nI0426 21:09:34.151118 28 log.go:172] (0xc0003db1e0) Data frame received for 1\nI0426 21:09:34.151142 28 log.go:172] (0xc0003e28c0) (1) Data frame handling\nI0426 21:09:34.151169 28 log.go:172] (0xc0003e28c0) (1) Data frame sent\nI0426 21:09:34.151207 28 log.go:172] (0xc0003db1e0) (0xc0003e28c0) Stream removed, broadcasting: 1\nI0426 21:09:34.151340 28 log.go:172] (0xc0003db1e0) Go away received\nI0426 21:09:34.151647 28 log.go:172] (0xc0003db1e0) (0xc0003e28c0) Stream removed, broadcasting: 1\nI0426 21:09:34.151672 28 log.go:172] (0xc0003db1e0) (0xc00070e000) Stream removed, broadcasting: 3\nI0426 21:09:34.151684 28 log.go:172] (0xc0003db1e0) (0xc00075c000) Stream removed, broadcasting: 5\n" Apr 26 21:09:34.156: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9440.svc.cluster.local\tcanonical name = externalsvc.services-9440.svc.cluster.local.\nName:\texternalsvc.services-9440.svc.cluster.local\nAddress: 10.96.157.253\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9440, will wait for the garbage collector to delete the pods Apr 26 21:09:34.223: INFO: Deleting ReplicationController externalsvc took: 6.320949ms Apr 26 21:09:34.623: INFO: Terminating ReplicationController externalsvc pods took: 400.259842ms Apr 26 21:09:49.564: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] 
[sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:09:49.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9440" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:28.289 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":21,"skipped":357,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:09:49.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 26 21:09:49.710: INFO: Waiting up to 5m0s for pod "pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c" in namespace "emptydir-7703" to be "success or failure" Apr 26 21:09:49.714: INFO: Pod "pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.792963ms Apr 26 21:09:51.718: INFO: Pod "pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007709499s Apr 26 21:09:53.721: INFO: Pod "pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010721572s STEP: Saw pod success Apr 26 21:09:53.721: INFO: Pod "pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c" satisfied condition "success or failure" Apr 26 21:09:53.723: INFO: Trying to get logs from node jerma-worker2 pod pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c container test-container: STEP: delete the pod Apr 26 21:09:53.736: INFO: Waiting for pod pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c to disappear Apr 26 21:09:53.741: INFO: Pod pod-79a726e2-b6c8-43e6-9291-6ff01a50d63c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:09:53.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7703" for this suite. 
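The emptyDir test above writes a file as root with mode 0666 onto a tmpfs-backed volume and verifies the bits. Requesting tmpfs is done with `medium: Memory`; a sketch (name and image are illustrative):

```yaml
# Hypothetical example: medium "Memory" backs the emptyDir with tmpfs
# rather than node disk; file modes are then checked inside the mount.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed
```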
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":361,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:09:53.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6999 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6999 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6999 Apr 26 21:09:53.856: INFO: Found 0 stateful pods, waiting for 1 Apr 26 21:10:03.861: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 26 21:10:03.864: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:10:04.144: INFO: stderr: "I0426 21:10:04.009853 59 log.go:172] (0xc0007c6b00) (0xc0007901e0) Create stream\nI0426 21:10:04.009911 59 log.go:172] (0xc0007c6b00) (0xc0007901e0) Stream added, broadcasting: 1\nI0426 21:10:04.013062 59 log.go:172] (0xc0007c6b00) Reply frame received for 1\nI0426 21:10:04.013106 59 log.go:172] (0xc0007c6b00) (0xc000389900) Create stream\nI0426 21:10:04.013206 59 log.go:172] (0xc0007c6b00) (0xc000389900) Stream added, broadcasting: 3\nI0426 21:10:04.014392 59 log.go:172] (0xc0007c6b00) Reply frame received for 3\nI0426 21:10:04.014436 59 log.go:172] (0xc0007c6b00) (0xc000790320) Create stream\nI0426 21:10:04.014453 59 log.go:172] (0xc0007c6b00) (0xc000790320) Stream added, broadcasting: 5\nI0426 21:10:04.015450 59 log.go:172] (0xc0007c6b00) Reply frame received for 5\nI0426 21:10:04.103451 59 log.go:172] (0xc0007c6b00) Data frame received for 5\nI0426 21:10:04.103498 59 log.go:172] (0xc000790320) (5) Data frame handling\nI0426 21:10:04.103535 59 log.go:172] (0xc000790320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 21:10:04.134434 59 log.go:172] (0xc0007c6b00) Data frame received for 3\nI0426 21:10:04.134459 59 log.go:172] (0xc000389900) (3) Data frame handling\nI0426 21:10:04.134479 59 log.go:172] (0xc000389900) (3) Data frame sent\nI0426 21:10:04.134486 59 log.go:172] (0xc0007c6b00) Data frame received for 3\nI0426 21:10:04.134491 59 log.go:172] (0xc000389900) (3) Data frame handling\nI0426 21:10:04.134620 59 log.go:172] (0xc0007c6b00) Data frame received for 5\nI0426 21:10:04.134658 59 log.go:172] (0xc000790320) (5) Data frame handling\nI0426 21:10:04.137071 59 log.go:172] (0xc0007c6b00) Data frame received for 1\nI0426 21:10:04.137098 59 log.go:172] (0xc0007901e0) (1) Data frame handling\nI0426 21:10:04.137275 59 log.go:172] (0xc0007901e0) (1) 
Data frame sent\nI0426 21:10:04.137308 59 log.go:172] (0xc0007c6b00) (0xc0007901e0) Stream removed, broadcasting: 1\nI0426 21:10:04.137326 59 log.go:172] (0xc0007c6b00) Go away received\nI0426 21:10:04.137829 59 log.go:172] (0xc0007c6b00) (0xc0007901e0) Stream removed, broadcasting: 1\nI0426 21:10:04.137868 59 log.go:172] (0xc0007c6b00) (0xc000389900) Stream removed, broadcasting: 3\nI0426 21:10:04.137898 59 log.go:172] (0xc0007c6b00) (0xc000790320) Stream removed, broadcasting: 5\n" Apr 26 21:10:04.144: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:10:04.144: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 21:10:04.148: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 26 21:10:14.154: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:10:14.154: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 21:10:14.185: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999395s Apr 26 21:10:15.189: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976305187s Apr 26 21:10:16.194: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.971614748s Apr 26 21:10:17.199: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.967220859s Apr 26 21:10:18.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962067008s Apr 26 21:10:19.208: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.957311733s Apr 26 21:10:20.213: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.95300717s Apr 26 21:10:21.216: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.948000451s Apr 26 21:10:22.221: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.944977862s Apr 26 21:10:23.226: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 940.206188ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6999 Apr 26 21:10:24.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:10:24.462: INFO: stderr: "I0426 21:10:24.367796 79 log.go:172] (0xc0007a8a50) (0xc000762000) Create stream\nI0426 21:10:24.367857 79 log.go:172] (0xc0007a8a50) (0xc000762000) Stream added, broadcasting: 1\nI0426 21:10:24.370027 79 log.go:172] (0xc0007a8a50) Reply frame received for 1\nI0426 21:10:24.370092 79 log.go:172] (0xc0007a8a50) (0xc00068fae0) Create stream\nI0426 21:10:24.370107 79 log.go:172] (0xc0007a8a50) (0xc00068fae0) Stream added, broadcasting: 3\nI0426 21:10:24.370942 79 log.go:172] (0xc0007a8a50) Reply frame received for 3\nI0426 21:10:24.371009 79 log.go:172] (0xc0007a8a50) (0xc0007620a0) Create stream\nI0426 21:10:24.371052 79 log.go:172] (0xc0007a8a50) (0xc0007620a0) Stream added, broadcasting: 5\nI0426 21:10:24.372036 79 log.go:172] (0xc0007a8a50) Reply frame received for 5\nI0426 21:10:24.454744 79 log.go:172] (0xc0007a8a50) Data frame received for 5\nI0426 21:10:24.454790 79 log.go:172] (0xc0007620a0) (5) Data frame handling\nI0426 21:10:24.454805 79 log.go:172] (0xc0007620a0) (5) Data frame sent\nI0426 21:10:24.454829 79 log.go:172] (0xc0007a8a50) Data frame received for 5\nI0426 21:10:24.454852 79 log.go:172] (0xc0007620a0) (5) Data frame handling\nI0426 21:10:24.454887 79 log.go:172] (0xc0007a8a50) Data frame received for 3\nI0426 21:10:24.454905 79 log.go:172] (0xc00068fae0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 21:10:24.454918 79 log.go:172] (0xc00068fae0) (3) Data frame sent\nI0426 21:10:24.454980 79 log.go:172] (0xc0007a8a50) Data frame received for 3\nI0426 21:10:24.455003 79 log.go:172] (0xc00068fae0) (3) Data frame 
handling\nI0426 21:10:24.456961 79 log.go:172] (0xc0007a8a50) Data frame received for 1\nI0426 21:10:24.457001 79 log.go:172] (0xc000762000) (1) Data frame handling\nI0426 21:10:24.457058 79 log.go:172] (0xc000762000) (1) Data frame sent\nI0426 21:10:24.457253 79 log.go:172] (0xc0007a8a50) (0xc000762000) Stream removed, broadcasting: 1\nI0426 21:10:24.457320 79 log.go:172] (0xc0007a8a50) Go away received\nI0426 21:10:24.457774 79 log.go:172] (0xc0007a8a50) (0xc000762000) Stream removed, broadcasting: 1\nI0426 21:10:24.457800 79 log.go:172] (0xc0007a8a50) (0xc00068fae0) Stream removed, broadcasting: 3\nI0426 21:10:24.457811 79 log.go:172] (0xc0007a8a50) (0xc0007620a0) Stream removed, broadcasting: 5\n" Apr 26 21:10:24.462: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 21:10:24.462: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 21:10:24.469: INFO: Found 1 stateful pods, waiting for 3 Apr 26 21:10:34.474: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 21:10:34.474: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 21:10:34.474: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 26 21:10:34.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:10:34.717: INFO: stderr: "I0426 21:10:34.615685 100 log.go:172] (0xc0000f4e70) (0xc000647c20) Create stream\nI0426 21:10:34.615746 100 log.go:172] (0xc0000f4e70) (0xc000647c20) Stream added, broadcasting: 1\nI0426 21:10:34.618363 100 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0426 
21:10:34.618403 100 log.go:172] (0xc0000f4e70) (0xc000647cc0) Create stream\nI0426 21:10:34.618413 100 log.go:172] (0xc0000f4e70) (0xc000647cc0) Stream added, broadcasting: 3\nI0426 21:10:34.619297 100 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0426 21:10:34.619340 100 log.go:172] (0xc0000f4e70) (0xc0009ee000) Create stream\nI0426 21:10:34.619354 100 log.go:172] (0xc0000f4e70) (0xc0009ee000) Stream added, broadcasting: 5\nI0426 21:10:34.620242 100 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0426 21:10:34.710062 100 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0426 21:10:34.710112 100 log.go:172] (0xc000647cc0) (3) Data frame handling\nI0426 21:10:34.710144 100 log.go:172] (0xc000647cc0) (3) Data frame sent\nI0426 21:10:34.710160 100 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0426 21:10:34.710175 100 log.go:172] (0xc000647cc0) (3) Data frame handling\nI0426 21:10:34.710203 100 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0426 21:10:34.710228 100 log.go:172] (0xc0009ee000) (5) Data frame handling\nI0426 21:10:34.710253 100 log.go:172] (0xc0009ee000) (5) Data frame sent\nI0426 21:10:34.710272 100 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0426 21:10:34.710291 100 log.go:172] (0xc0009ee000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 21:10:34.711398 100 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0426 21:10:34.711417 100 log.go:172] (0xc000647c20) (1) Data frame handling\nI0426 21:10:34.711430 100 log.go:172] (0xc000647c20) (1) Data frame sent\nI0426 21:10:34.711447 100 log.go:172] (0xc0000f4e70) (0xc000647c20) Stream removed, broadcasting: 1\nI0426 21:10:34.711464 100 log.go:172] (0xc0000f4e70) Go away received\nI0426 21:10:34.711807 100 log.go:172] (0xc0000f4e70) (0xc000647c20) Stream removed, broadcasting: 1\nI0426 21:10:34.711832 100 log.go:172] (0xc0000f4e70) (0xc000647cc0) Stream removed, broadcasting: 3\nI0426 21:10:34.711844 100 log.go:172] 
(0xc0000f4e70) (0xc0009ee000) Stream removed, broadcasting: 5\n" Apr 26 21:10:34.717: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:10:34.717: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 21:10:34.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:10:34.948: INFO: stderr: "I0426 21:10:34.846345 123 log.go:172] (0xc000b84000) (0xc000b6e000) Create stream\nI0426 21:10:34.846397 123 log.go:172] (0xc000b84000) (0xc000b6e000) Stream added, broadcasting: 1\nI0426 21:10:34.848408 123 log.go:172] (0xc000b84000) Reply frame received for 1\nI0426 21:10:34.848431 123 log.go:172] (0xc000b84000) (0xc00052df40) Create stream\nI0426 21:10:34.848456 123 log.go:172] (0xc000b84000) (0xc00052df40) Stream added, broadcasting: 3\nI0426 21:10:34.849358 123 log.go:172] (0xc000b84000) Reply frame received for 3\nI0426 21:10:34.849389 123 log.go:172] (0xc000b84000) (0xc000a56000) Create stream\nI0426 21:10:34.849399 123 log.go:172] (0xc000b84000) (0xc000a56000) Stream added, broadcasting: 5\nI0426 21:10:34.850063 123 log.go:172] (0xc000b84000) Reply frame received for 5\nI0426 21:10:34.913851 123 log.go:172] (0xc000b84000) Data frame received for 5\nI0426 21:10:34.913874 123 log.go:172] (0xc000a56000) (5) Data frame handling\nI0426 21:10:34.913897 123 log.go:172] (0xc000a56000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 21:10:34.942213 123 log.go:172] (0xc000b84000) Data frame received for 3\nI0426 21:10:34.942261 123 log.go:172] (0xc00052df40) (3) Data frame handling\nI0426 21:10:34.942285 123 log.go:172] (0xc00052df40) (3) Data frame sent\nI0426 21:10:34.942297 123 log.go:172] (0xc000b84000) Data frame received for 3\nI0426 21:10:34.942305 123 log.go:172] 
(0xc00052df40) (3) Data frame handling\nI0426 21:10:34.942456 123 log.go:172] (0xc000b84000) Data frame received for 5\nI0426 21:10:34.942489 123 log.go:172] (0xc000a56000) (5) Data frame handling\nI0426 21:10:34.944175 123 log.go:172] (0xc000b84000) Data frame received for 1\nI0426 21:10:34.944194 123 log.go:172] (0xc000b6e000) (1) Data frame handling\nI0426 21:10:34.944202 123 log.go:172] (0xc000b6e000) (1) Data frame sent\nI0426 21:10:34.944220 123 log.go:172] (0xc000b84000) (0xc000b6e000) Stream removed, broadcasting: 1\nI0426 21:10:34.944247 123 log.go:172] (0xc000b84000) Go away received\nI0426 21:10:34.944633 123 log.go:172] (0xc000b84000) (0xc000b6e000) Stream removed, broadcasting: 1\nI0426 21:10:34.944658 123 log.go:172] (0xc000b84000) (0xc00052df40) Stream removed, broadcasting: 3\nI0426 21:10:34.944681 123 log.go:172] (0xc000b84000) (0xc000a56000) Stream removed, broadcasting: 5\n" Apr 26 21:10:34.948: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:10:34.948: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 21:10:34.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:10:35.178: INFO: stderr: "I0426 21:10:35.073390 143 log.go:172] (0xc000994630) (0xc000a72000) Create stream\nI0426 21:10:35.073459 143 log.go:172] (0xc000994630) (0xc000a72000) Stream added, broadcasting: 1\nI0426 21:10:35.076382 143 log.go:172] (0xc000994630) Reply frame received for 1\nI0426 21:10:35.076430 143 log.go:172] (0xc000994630) (0xc000621c20) Create stream\nI0426 21:10:35.076442 143 log.go:172] (0xc000994630) (0xc000621c20) Stream added, broadcasting: 3\nI0426 21:10:35.077524 143 log.go:172] (0xc000994630) Reply frame received for 3\nI0426 21:10:35.077561 143 log.go:172] (0xc000994630) 
(0xc000a720a0) Create stream\nI0426 21:10:35.077581 143 log.go:172] (0xc000994630) (0xc000a720a0) Stream added, broadcasting: 5\nI0426 21:10:35.078595 143 log.go:172] (0xc000994630) Reply frame received for 5\nI0426 21:10:35.146089 143 log.go:172] (0xc000994630) Data frame received for 5\nI0426 21:10:35.146118 143 log.go:172] (0xc000a720a0) (5) Data frame handling\nI0426 21:10:35.146138 143 log.go:172] (0xc000a720a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 21:10:35.169679 143 log.go:172] (0xc000994630) Data frame received for 3\nI0426 21:10:35.169832 143 log.go:172] (0xc000621c20) (3) Data frame handling\nI0426 21:10:35.169870 143 log.go:172] (0xc000621c20) (3) Data frame sent\nI0426 21:10:35.169905 143 log.go:172] (0xc000994630) Data frame received for 5\nI0426 21:10:35.169928 143 log.go:172] (0xc000a720a0) (5) Data frame handling\nI0426 21:10:35.170256 143 log.go:172] (0xc000994630) Data frame received for 3\nI0426 21:10:35.170290 143 log.go:172] (0xc000621c20) (3) Data frame handling\nI0426 21:10:35.172329 143 log.go:172] (0xc000994630) Data frame received for 1\nI0426 21:10:35.172364 143 log.go:172] (0xc000a72000) (1) Data frame handling\nI0426 21:10:35.172385 143 log.go:172] (0xc000a72000) (1) Data frame sent\nI0426 21:10:35.172409 143 log.go:172] (0xc000994630) (0xc000a72000) Stream removed, broadcasting: 1\nI0426 21:10:35.172505 143 log.go:172] (0xc000994630) Go away received\nI0426 21:10:35.172929 143 log.go:172] (0xc000994630) (0xc000a72000) Stream removed, broadcasting: 1\nI0426 21:10:35.172952 143 log.go:172] (0xc000994630) (0xc000621c20) Stream removed, broadcasting: 3\nI0426 21:10:35.172965 143 log.go:172] (0xc000994630) (0xc000a720a0) Stream removed, broadcasting: 5\n" Apr 26 21:10:35.179: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:10:35.179: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' Apr 26 21:10:35.179: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 21:10:35.182: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 26 21:10:45.190: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:10:45.190: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:10:45.190: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:10:45.207: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999285s Apr 26 21:10:46.212: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990997011s Apr 26 21:10:47.217: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985739247s Apr 26 21:10:48.222: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980355292s Apr 26 21:10:49.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975235534s Apr 26 21:10:50.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9702749s Apr 26 21:10:51.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966641268s Apr 26 21:10:52.242: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961588082s Apr 26 21:10:53.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955646577s Apr 26 21:10:54.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.852382ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6999 Apr 26 21:10:55.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:10:55.485: INFO: stderr: "I0426 21:10:55.397586 164 log.go:172] (0xc000a553f0) (0xc000a66460) Create stream\nI0426 21:10:55.397643 164 log.go:172] 
(0xc000a553f0) (0xc000a66460) Stream added, broadcasting: 1\nI0426 21:10:55.400228 164 log.go:172] (0xc000a553f0) Reply frame received for 1\nI0426 21:10:55.400507 164 log.go:172] (0xc000a553f0) (0xc000a440a0) Create stream\nI0426 21:10:55.400545 164 log.go:172] (0xc000a553f0) (0xc000a440a0) Stream added, broadcasting: 3\nI0426 21:10:55.402668 164 log.go:172] (0xc000a553f0) Reply frame received for 3\nI0426 21:10:55.402719 164 log.go:172] (0xc000a553f0) (0xc0007f92c0) Create stream\nI0426 21:10:55.402736 164 log.go:172] (0xc000a553f0) (0xc0007f92c0) Stream added, broadcasting: 5\nI0426 21:10:55.403931 164 log.go:172] (0xc000a553f0) Reply frame received for 5\nI0426 21:10:55.477515 164 log.go:172] (0xc000a553f0) Data frame received for 5\nI0426 21:10:55.477562 164 log.go:172] (0xc0007f92c0) (5) Data frame handling\nI0426 21:10:55.477584 164 log.go:172] (0xc0007f92c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 21:10:55.477615 164 log.go:172] (0xc000a553f0) Data frame received for 3\nI0426 21:10:55.477668 164 log.go:172] (0xc000a440a0) (3) Data frame handling\nI0426 21:10:55.477709 164 log.go:172] (0xc000a440a0) (3) Data frame sent\nI0426 21:10:55.477729 164 log.go:172] (0xc000a553f0) Data frame received for 3\nI0426 21:10:55.477746 164 log.go:172] (0xc000a440a0) (3) Data frame handling\nI0426 21:10:55.477789 164 log.go:172] (0xc000a553f0) Data frame received for 5\nI0426 21:10:55.477844 164 log.go:172] (0xc0007f92c0) (5) Data frame handling\nI0426 21:10:55.479343 164 log.go:172] (0xc000a553f0) Data frame received for 1\nI0426 21:10:55.479374 164 log.go:172] (0xc000a66460) (1) Data frame handling\nI0426 21:10:55.479397 164 log.go:172] (0xc000a66460) (1) Data frame sent\nI0426 21:10:55.479420 164 log.go:172] (0xc000a553f0) (0xc000a66460) Stream removed, broadcasting: 1\nI0426 21:10:55.479461 164 log.go:172] (0xc000a553f0) Go away received\nI0426 21:10:55.479897 164 log.go:172] (0xc000a553f0) (0xc000a66460) Stream removed, 
broadcasting: 1\nI0426 21:10:55.479924 164 log.go:172] (0xc000a553f0) (0xc000a440a0) Stream removed, broadcasting: 3\nI0426 21:10:55.479938 164 log.go:172] (0xc000a553f0) (0xc0007f92c0) Stream removed, broadcasting: 5\n" Apr 26 21:10:55.485: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 21:10:55.485: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 21:10:55.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:10:55.707: INFO: stderr: "I0426 21:10:55.616979 184 log.go:172] (0xc000535130) (0xc00062a000) Create stream\nI0426 21:10:55.617029 184 log.go:172] (0xc000535130) (0xc00062a000) Stream added, broadcasting: 1\nI0426 21:10:55.619505 184 log.go:172] (0xc000535130) Reply frame received for 1\nI0426 21:10:55.619550 184 log.go:172] (0xc000535130) (0xc00066b9a0) Create stream\nI0426 21:10:55.619570 184 log.go:172] (0xc000535130) (0xc00066b9a0) Stream added, broadcasting: 3\nI0426 21:10:55.620705 184 log.go:172] (0xc000535130) Reply frame received for 3\nI0426 21:10:55.620739 184 log.go:172] (0xc000535130) (0xc000718000) Create stream\nI0426 21:10:55.620761 184 log.go:172] (0xc000535130) (0xc000718000) Stream added, broadcasting: 5\nI0426 21:10:55.621784 184 log.go:172] (0xc000535130) Reply frame received for 5\nI0426 21:10:55.698371 184 log.go:172] (0xc000535130) Data frame received for 5\nI0426 21:10:55.698429 184 log.go:172] (0xc000718000) (5) Data frame handling\nI0426 21:10:55.698451 184 log.go:172] (0xc000535130) Data frame received for 3\nI0426 21:10:55.698483 184 log.go:172] (0xc000718000) (5) Data frame sent\nI0426 21:10:55.698515 184 log.go:172] (0xc000535130) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 21:10:55.698537 184 log.go:172] 
(0xc000718000) (5) Data frame handling\nI0426 21:10:55.698572 184 log.go:172] (0xc00066b9a0) (3) Data frame handling\nI0426 21:10:55.698682 184 log.go:172] (0xc00066b9a0) (3) Data frame sent\nI0426 21:10:55.698715 184 log.go:172] (0xc000535130) Data frame received for 3\nI0426 21:10:55.698733 184 log.go:172] (0xc00066b9a0) (3) Data frame handling\nI0426 21:10:55.700086 184 log.go:172] (0xc000535130) Data frame received for 1\nI0426 21:10:55.700123 184 log.go:172] (0xc00062a000) (1) Data frame handling\nI0426 21:10:55.700158 184 log.go:172] (0xc00062a000) (1) Data frame sent\nI0426 21:10:55.700190 184 log.go:172] (0xc000535130) (0xc00062a000) Stream removed, broadcasting: 1\nI0426 21:10:55.700599 184 log.go:172] (0xc000535130) (0xc00062a000) Stream removed, broadcasting: 1\nI0426 21:10:55.700637 184 log.go:172] (0xc000535130) (0xc00066b9a0) Stream removed, broadcasting: 3\nI0426 21:10:55.700817 184 log.go:172] (0xc000535130) (0xc000718000) Stream removed, broadcasting: 5\n" Apr 26 21:10:55.707: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 21:10:55.707: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 21:10:55.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6999 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:10:55.917: INFO: stderr: "I0426 21:10:55.837834 206 log.go:172] (0xc000b0a000) (0xc000263680) Create stream\nI0426 21:10:55.837902 206 log.go:172] (0xc000b0a000) (0xc000263680) Stream added, broadcasting: 1\nI0426 21:10:55.840856 206 log.go:172] (0xc000b0a000) Reply frame received for 1\nI0426 21:10:55.840913 206 log.go:172] (0xc000b0a000) (0xc0008fe000) Create stream\nI0426 21:10:55.840941 206 log.go:172] (0xc000b0a000) (0xc0008fe000) Stream added, broadcasting: 3\nI0426 21:10:55.842088 206 log.go:172] (0xc000b0a000) Reply 
frame received for 3\nI0426 21:10:55.842114 206 log.go:172] (0xc000b0a000) (0xc0008fe0a0) Create stream\nI0426 21:10:55.842122 206 log.go:172] (0xc000b0a000) (0xc0008fe0a0) Stream added, broadcasting: 5\nI0426 21:10:55.842935 206 log.go:172] (0xc000b0a000) Reply frame received for 5\nI0426 21:10:55.910226 206 log.go:172] (0xc000b0a000) Data frame received for 3\nI0426 21:10:55.910274 206 log.go:172] (0xc000b0a000) Data frame received for 5\nI0426 21:10:55.910324 206 log.go:172] (0xc0008fe0a0) (5) Data frame handling\nI0426 21:10:55.910344 206 log.go:172] (0xc0008fe0a0) (5) Data frame sent\nI0426 21:10:55.910355 206 log.go:172] (0xc000b0a000) Data frame received for 5\nI0426 21:10:55.910363 206 log.go:172] (0xc0008fe0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 21:10:55.910400 206 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0426 21:10:55.910440 206 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0426 21:10:55.910456 206 log.go:172] (0xc000b0a000) Data frame received for 3\nI0426 21:10:55.910468 206 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0426 21:10:55.911915 206 log.go:172] (0xc000b0a000) Data frame received for 1\nI0426 21:10:55.911943 206 log.go:172] (0xc000263680) (1) Data frame handling\nI0426 21:10:55.911978 206 log.go:172] (0xc000263680) (1) Data frame sent\nI0426 21:10:55.912002 206 log.go:172] (0xc000b0a000) (0xc000263680) Stream removed, broadcasting: 1\nI0426 21:10:55.912070 206 log.go:172] (0xc000b0a000) Go away received\nI0426 21:10:55.912470 206 log.go:172] (0xc000b0a000) (0xc000263680) Stream removed, broadcasting: 1\nI0426 21:10:55.912500 206 log.go:172] (0xc000b0a000) (0xc0008fe000) Stream removed, broadcasting: 3\nI0426 21:10:55.912515 206 log.go:172] (0xc000b0a000) (0xc0008fe0a0) Stream removed, broadcasting: 5\n" Apr 26 21:10:55.918: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 21:10:55.918: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 21:10:55.918: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 26 21:11:25.934: INFO: Deleting all statefulset in ns statefulset-6999 Apr 26 21:11:25.937: INFO: Scaling statefulset ss to 0 Apr 26 21:11:25.947: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 21:11:25.950: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:11:25.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6999" for this suite. • [SLOW TEST:92.233 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":23,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] 
[sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:11:25.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-7029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7029 to expose endpoints map[] Apr 26 21:11:26.178: INFO: Get endpoints failed (21.003051ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 26 21:11:27.181: INFO: successfully validated that service endpoint-test2 in namespace services-7029 exposes endpoints map[] (1.024226267s elapsed) STEP: Creating pod pod1 in namespace services-7029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7029 to expose endpoints map[pod1:[80]] Apr 26 21:11:30.242: INFO: successfully validated that service endpoint-test2 in namespace services-7029 exposes endpoints map[pod1:[80]] (3.054803715s elapsed) STEP: Creating pod pod2 in namespace services-7029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7029 to expose endpoints map[pod1:[80] pod2:[80]] Apr 26 21:11:34.474: INFO: successfully validated that service endpoint-test2 in namespace services-7029 exposes endpoints map[pod1:[80] pod2:[80]] (4.227445298s elapsed) STEP: Deleting pod pod1 in namespace services-7029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7029 to expose endpoints map[pod2:[80]] Apr 26 21:11:35.522: INFO: successfully validated that service endpoint-test2 
in namespace services-7029 exposes endpoints map[pod2:[80]] (1.04359027s elapsed) STEP: Deleting pod pod2 in namespace services-7029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7029 to expose endpoints map[] Apr 26 21:11:36.534: INFO: successfully validated that service endpoint-test2 in namespace services-7029 exposes endpoints map[] (1.007453876s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:11:36.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7029" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.591 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":24,"skipped":438,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:11:36.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 26 21:11:41.164: INFO: Successfully updated pod "adopt-release-cczsw" STEP: Checking that the Job readopts the Pod Apr 26 21:11:41.164: INFO: Waiting up to 15m0s for pod "adopt-release-cczsw" in namespace "job-8189" to be "adopted" Apr 26 21:11:41.180: INFO: Pod "adopt-release-cczsw": Phase="Running", Reason="", readiness=true. Elapsed: 16.139972ms Apr 26 21:11:43.185: INFO: Pod "adopt-release-cczsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.020272051s Apr 26 21:11:43.185: INFO: Pod "adopt-release-cczsw" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 26 21:11:43.694: INFO: Successfully updated pod "adopt-release-cczsw" STEP: Checking that the Job releases the Pod Apr 26 21:11:43.695: INFO: Waiting up to 15m0s for pod "adopt-release-cczsw" in namespace "job-8189" to be "released" Apr 26 21:11:43.715: INFO: Pod "adopt-release-cczsw": Phase="Running", Reason="", readiness=true. Elapsed: 20.68927ms Apr 26 21:11:45.719: INFO: Pod "adopt-release-cczsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.024310177s Apr 26 21:11:45.719: INFO: Pod "adopt-release-cczsw" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:11:45.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8189" for this suite. 
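The adopt/release behaviour exercised above turns on controller `ownerReferences`: the Job controller adopts a running pod whose labels match its selector but which has no controller owner, and releases a pod whose labels stop matching. A hedged sketch of a Job like the test's (the name, image, and command are assumptions, not read from the test binary):

```yaml
# Sketch of a Job resembling the "adopt-release" Job in the log.
# Name, image, and command are assumptions; the e2e framework builds its own spec.
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release
spec:
  parallelism: 2      # matches "Ensuring active pods == parallelism" above
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]   # keep the pods Running so they can be orphaned
```

The test orphans one pod by clearing its controller `ownerReferences` entry, then waits for the pod to be "adopted" (the Job controller re-sets itself as owner because the labels still match); it then strips the Job-derived labels from the same pod and waits for it to be "released" (the controller drops its ownership of a pod its selector no longer matches).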
• [SLOW TEST:9.153 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":25,"skipped":450,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:11:45.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-81508807-36f2-4bff-aa89-e89cacfb684d in namespace container-probe-3837 Apr 26 21:11:49.820: INFO: Started pod liveness-81508807-36f2-4bff-aa89-e89cacfb684d in namespace container-probe-3837 STEP: checking the pod's current state and verifying that restartCount is present Apr 26 21:11:49.823: INFO: Initial restart count of pod liveness-81508807-36f2-4bff-aa89-e89cacfb684d is 0 Apr 26 21:12:07.863: INFO: Restart count of pod container-probe-3837/liveness-81508807-36f2-4bff-aa89-e89cacfb684d is now 1 
(18.039866991s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:12:07.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3837" for this suite. • [SLOW TEST:22.185 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":463,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:12:07.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3bb24231-6bcb-4d4c-a3f6-5c053902c458 STEP: Creating a pod to test consume configMaps Apr 26 21:12:07.992: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989" in namespace 
"projected-6360" to be "success or failure" Apr 26 21:12:08.007: INFO: Pod "pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989": Phase="Pending", Reason="", readiness=false. Elapsed: 15.034844ms Apr 26 21:12:10.011: INFO: Pod "pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01895015s Apr 26 21:12:12.015: INFO: Pod "pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023132145s STEP: Saw pod success Apr 26 21:12:12.015: INFO: Pod "pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989" satisfied condition "success or failure" Apr 26 21:12:12.018: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989 container projected-configmap-volume-test: STEP: delete the pod Apr 26 21:12:12.051: INFO: Waiting for pod pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989 to disappear Apr 26 21:12:12.055: INFO: Pod pod-projected-configmaps-e32e9772-d1ea-4a6e-ada0-415b853e3989 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:12:12.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6360" for this suite. 
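The "success or failure" condition the test waits on above means: poll the pod phase until it is terminal (Succeeded or Failed), as seen in the Pending → Pending → Succeeded progression in the log. A small sketch of that terminal-phase check (the phase sequence is taken from the log; the loop itself is illustrative):

```python
def pod_finished(phase):
    # "success or failure" in the log means: wait for a terminal pod phase.
    return phase in ("Succeeded", "Failed")

# Phase sequence observed above: two Pending polls ~2s apart, then Succeeded.
observed = ["Pending", "Pending", "Succeeded"]
polls = 0
for phase in observed:
    polls += 1
    if pod_finished(phase):
        break
```

Note the test then asserts the pod actually Succeeded before fetching container logs; a pod that terminated as Failed also satisfies "finished" but would fail the subsequent assertion.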
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:12:12.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 21:12:12.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6" in namespace "downward-api-979" to be "success or failure" Apr 26 21:12:12.153: INFO: Pod "downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.636015ms Apr 26 21:12:14.162: INFO: Pod "downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026867622s Apr 26 21:12:16.167: INFO: Pod "downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03134993s STEP: Saw pod success Apr 26 21:12:16.167: INFO: Pod "downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6" satisfied condition "success or failure" Apr 26 21:12:16.172: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6 container client-container: STEP: delete the pod Apr 26 21:12:16.203: INFO: Waiting for pod downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6 to disappear Apr 26 21:12:16.235: INFO: Pod downwardapi-volume-c9c3c931-2a1d-4f9a-aa32-e4856d79f2f6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:12:16.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-979" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:12:16.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-1023ee81-661a-47a5-bfdb-cf7eaf6bf605 STEP: Creating a pod to test consume configMaps Apr 26 21:12:16.306: INFO: 
Waiting up to 5m0s for pod "pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a" in namespace "configmap-6111" to be "success or failure" Apr 26 21:12:16.310: INFO: Pod "pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797208ms Apr 26 21:12:18.314: INFO: Pod "pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007993407s Apr 26 21:12:20.318: INFO: Pod "pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012132316s STEP: Saw pod success Apr 26 21:12:20.318: INFO: Pod "pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a" satisfied condition "success or failure" Apr 26 21:12:20.321: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a container configmap-volume-test: STEP: delete the pod Apr 26 21:12:20.371: INFO: Waiting for pod pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a to disappear Apr 26 21:12:20.382: INFO: Pod pod-configmaps-dfd544b6-496c-4f83-8d9e-00c2902b778a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:12:20.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6111" for this suite. 
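The ConfigMap volume test above relies on each key of the ConfigMap appearing as a file in the mounted directory. A simplified sketch of that projection — real kubelet volumes use atomically-swapped `..data` symlink directories, which this deliberately omits:

```python
import pathlib
import tempfile

def project_configmap(data, mount_dir):
    """Write each ConfigMap key as a file under mount_dir, the way a
    configmap volume presents its data to the container (simplified)."""
    mount = pathlib.Path(mount_dir)
    for key, value in data.items():
        (mount / key).write_text(value)

with tempfile.TemporaryDirectory() as d:
    project_configmap({"data-1": "value-1"}, d)
    content = pathlib.Path(d, "data-1").read_text()
```

The test container then just cats the file and the framework compares the output against the expected value.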
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:12:20.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 26 21:12:21.182: INFO: Pod name wrapped-volume-race-141182d1-cbb5-4175-910e-4a01c736390a: Found 0 pods out of 5 Apr 26 21:12:26.188: INFO: Pod name wrapped-volume-race-141182d1-cbb5-4175-910e-4a01c736390a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-141182d1-cbb5-4175-910e-4a01c736390a in namespace emptydir-wrapper-2126, will wait for the garbage collector to delete the pods Apr 26 21:12:40.310: INFO: Deleting ReplicationController wrapped-volume-race-141182d1-cbb5-4175-910e-4a01c736390a took: 47.744744ms Apr 26 21:12:40.711: INFO: Terminating ReplicationController wrapped-volume-race-141182d1-cbb5-4175-910e-4a01c736390a pods took: 400.248444ms STEP: Creating RC which spawns configmap-volume pods Apr 26 21:12:50.341: INFO: Pod name wrapped-volume-race-c8777d83-29f1-4516-aba2-0e91ccb4b653: Found 0 pods 
out of 5 Apr 26 21:12:55.349: INFO: Pod name wrapped-volume-race-c8777d83-29f1-4516-aba2-0e91ccb4b653: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c8777d83-29f1-4516-aba2-0e91ccb4b653 in namespace emptydir-wrapper-2126, will wait for the garbage collector to delete the pods Apr 26 21:13:09.437: INFO: Deleting ReplicationController wrapped-volume-race-c8777d83-29f1-4516-aba2-0e91ccb4b653 took: 9.857488ms Apr 26 21:13:09.838: INFO: Terminating ReplicationController wrapped-volume-race-c8777d83-29f1-4516-aba2-0e91ccb4b653 pods took: 400.326528ms STEP: Creating RC which spawns configmap-volume pods Apr 26 21:13:20.578: INFO: Pod name wrapped-volume-race-8a7e4f80-5dfc-4b97-9bfd-d623ebdc3d9f: Found 0 pods out of 5 Apr 26 21:13:25.588: INFO: Pod name wrapped-volume-race-8a7e4f80-5dfc-4b97-9bfd-d623ebdc3d9f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8a7e4f80-5dfc-4b97-9bfd-d623ebdc3d9f in namespace emptydir-wrapper-2126, will wait for the garbage collector to delete the pods Apr 26 21:13:39.702: INFO: Deleting ReplicationController wrapped-volume-race-8a7e4f80-5dfc-4b97-9bfd-d623ebdc3d9f took: 8.851309ms Apr 26 21:13:40.102: INFO: Terminating ReplicationController wrapped-volume-race-8a7e4f80-5dfc-4b97-9bfd-d623ebdc3d9f pods took: 400.277431ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:13:50.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2126" for this suite. 
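The timings above ("Deleting ReplicationController … took: 9.857488ms" vs. "Terminating … pods took: 400.326528ms") reflect background cascading deletion: the owner object is removed quickly, and the garbage collector then deletes dependents by matching their ownerReference UIDs. A toy sketch of one collection pass — the UID and pod names are hypothetical placeholders, not values from this run:

```python
def gc_sweep(objects, deleted_owner_uid):
    """One pass of a background garbage-collector sketch: any object whose
    ownerReferences point at the deleted owner is collected."""
    return [
        o for o in objects
        if all(ref["uid"] != deleted_owner_uid
               for ref in o.get("ownerReferences", []))
    ]

rc_uid = "rc-uid-0001"  # hypothetical UID for the deleted ReplicationController
owned_pods = [
    {"name": f"wrapped-volume-race-pod-{i}", "ownerReferences": [{"uid": rc_uid}]}
    for i in range(5)
]
unrelated = {"name": "unrelated-pod"}
remaining = gc_sweep(owned_pods + [unrelated], rc_uid)
```

This is why the test explicitly says it "will wait for the garbage collector to delete the pods": pod removal is asynchronous with respect to the owner's deletion.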
• [SLOW TEST:90.453 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":30,"skipped":577,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:13:50.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-1597 STEP: creating replication controller nodeport-test in namespace services-1597 I0426 21:13:50.975731 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1597, replica count: 2 I0426 21:13:54.026115 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 21:13:57.026340 6 runners.go:189] nodeport-test Pods: 
2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 21:13:57.026: INFO: Creating new exec pod Apr 26 21:14:02.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1597 execpodzvrhd -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 26 21:14:02.686: INFO: stderr: "I0426 21:14:02.580322 226 log.go:172] (0xc00058cd10) (0xc000629b80) Create stream\nI0426 21:14:02.580382 226 log.go:172] (0xc00058cd10) (0xc000629b80) Stream added, broadcasting: 1\nI0426 21:14:02.582902 226 log.go:172] (0xc00058cd10) Reply frame received for 1\nI0426 21:14:02.582951 226 log.go:172] (0xc00058cd10) (0xc00090c000) Create stream\nI0426 21:14:02.582976 226 log.go:172] (0xc00058cd10) (0xc00090c000) Stream added, broadcasting: 3\nI0426 21:14:02.583978 226 log.go:172] (0xc00058cd10) Reply frame received for 3\nI0426 21:14:02.584045 226 log.go:172] (0xc00058cd10) (0xc0007cc780) Create stream\nI0426 21:14:02.584074 226 log.go:172] (0xc00058cd10) (0xc0007cc780) Stream added, broadcasting: 5\nI0426 21:14:02.585262 226 log.go:172] (0xc00058cd10) Reply frame received for 5\nI0426 21:14:02.678649 226 log.go:172] (0xc00058cd10) Data frame received for 3\nI0426 21:14:02.678687 226 log.go:172] (0xc00090c000) (3) Data frame handling\nI0426 21:14:02.678896 226 log.go:172] (0xc00058cd10) Data frame received for 5\nI0426 21:14:02.678922 226 log.go:172] (0xc0007cc780) (5) Data frame handling\nI0426 21:14:02.678941 226 log.go:172] (0xc0007cc780) (5) Data frame sent\nI0426 21:14:02.678949 226 log.go:172] (0xc00058cd10) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0426 21:14:02.678965 226 log.go:172] (0xc0007cc780) (5) Data frame handling\nI0426 21:14:02.681106 226 log.go:172] (0xc00058cd10) Data frame received for 1\nI0426 21:14:02.681271 226 log.go:172] (0xc000629b80) (1) Data frame handling\nI0426 21:14:02.681288 226 
log.go:172] (0xc000629b80) (1) Data frame sent\nI0426 21:14:02.681301 226 log.go:172] (0xc00058cd10) (0xc000629b80) Stream removed, broadcasting: 1\nI0426 21:14:02.681315 226 log.go:172] (0xc00058cd10) Go away received\nI0426 21:14:02.681768 226 log.go:172] (0xc00058cd10) (0xc000629b80) Stream removed, broadcasting: 1\nI0426 21:14:02.681800 226 log.go:172] (0xc00058cd10) (0xc00090c000) Stream removed, broadcasting: 3\nI0426 21:14:02.681825 226 log.go:172] (0xc00058cd10) (0xc0007cc780) Stream removed, broadcasting: 5\n" Apr 26 21:14:02.686: INFO: stdout: "" Apr 26 21:14:02.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1597 execpodzvrhd -- /bin/sh -x -c nc -zv -t -w 2 10.109.179.95 80' Apr 26 21:14:02.890: INFO: stderr: "I0426 21:14:02.815691 248 log.go:172] (0xc000aa2370) (0xc000ade1e0) Create stream\nI0426 21:14:02.815750 248 log.go:172] (0xc000aa2370) (0xc000ade1e0) Stream added, broadcasting: 1\nI0426 21:14:02.819657 248 log.go:172] (0xc000aa2370) Reply frame received for 1\nI0426 21:14:02.819697 248 log.go:172] (0xc000aa2370) (0xc0005a05a0) Create stream\nI0426 21:14:02.819706 248 log.go:172] (0xc000aa2370) (0xc0005a05a0) Stream added, broadcasting: 3\nI0426 21:14:02.820754 248 log.go:172] (0xc000aa2370) Reply frame received for 3\nI0426 21:14:02.820810 248 log.go:172] (0xc000aa2370) (0xc000759400) Create stream\nI0426 21:14:02.820829 248 log.go:172] (0xc000aa2370) (0xc000759400) Stream added, broadcasting: 5\nI0426 21:14:02.821971 248 log.go:172] (0xc000aa2370) Reply frame received for 5\nI0426 21:14:02.883737 248 log.go:172] (0xc000aa2370) Data frame received for 5\nI0426 21:14:02.883790 248 log.go:172] (0xc000aa2370) Data frame received for 3\nI0426 21:14:02.883814 248 log.go:172] (0xc0005a05a0) (3) Data frame handling\nI0426 21:14:02.883837 248 log.go:172] (0xc000759400) (5) Data frame handling\nI0426 21:14:02.883849 248 log.go:172] (0xc000759400) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.179.95 
80\nConnection to 10.109.179.95 80 port [tcp/http] succeeded!\nI0426 21:14:02.884032 248 log.go:172] (0xc000aa2370) Data frame received for 5\nI0426 21:14:02.884042 248 log.go:172] (0xc000759400) (5) Data frame handling\nI0426 21:14:02.885828 248 log.go:172] (0xc000aa2370) Data frame received for 1\nI0426 21:14:02.885843 248 log.go:172] (0xc000ade1e0) (1) Data frame handling\nI0426 21:14:02.885856 248 log.go:172] (0xc000ade1e0) (1) Data frame sent\nI0426 21:14:02.885867 248 log.go:172] (0xc000aa2370) (0xc000ade1e0) Stream removed, broadcasting: 1\nI0426 21:14:02.885906 248 log.go:172] (0xc000aa2370) Go away received\nI0426 21:14:02.886161 248 log.go:172] (0xc000aa2370) (0xc000ade1e0) Stream removed, broadcasting: 1\nI0426 21:14:02.886179 248 log.go:172] (0xc000aa2370) (0xc0005a05a0) Stream removed, broadcasting: 3\nI0426 21:14:02.886186 248 log.go:172] (0xc000aa2370) (0xc000759400) Stream removed, broadcasting: 5\n" Apr 26 21:14:02.891: INFO: stdout: "" Apr 26 21:14:02.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1597 execpodzvrhd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31283' Apr 26 21:14:03.093: INFO: stderr: "I0426 21:14:03.014477 270 log.go:172] (0xc000bd0420) (0xc000a04000) Create stream\nI0426 21:14:03.014555 270 log.go:172] (0xc000bd0420) (0xc000a04000) Stream added, broadcasting: 1\nI0426 21:14:03.026502 270 log.go:172] (0xc000bd0420) Reply frame received for 1\nI0426 21:14:03.029006 270 log.go:172] (0xc000bd0420) (0xc000a70000) Create stream\nI0426 21:14:03.029038 270 log.go:172] (0xc000bd0420) (0xc000a70000) Stream added, broadcasting: 3\nI0426 21:14:03.030791 270 log.go:172] (0xc000bd0420) Reply frame received for 3\nI0426 21:14:03.030821 270 log.go:172] (0xc000bd0420) (0xc000163400) Create stream\nI0426 21:14:03.030832 270 log.go:172] (0xc000bd0420) (0xc000163400) Stream added, broadcasting: 5\nI0426 21:14:03.033103 270 log.go:172] (0xc000bd0420) Reply frame received for 5\nI0426 
21:14:03.085010 270 log.go:172] (0xc000bd0420) Data frame received for 5\nI0426 21:14:03.085034 270 log.go:172] (0xc000163400) (5) Data frame handling\nI0426 21:14:03.085041 270 log.go:172] (0xc000163400) (5) Data frame sent\nI0426 21:14:03.085046 270 log.go:172] (0xc000bd0420) Data frame received for 5\nI0426 21:14:03.085051 270 log.go:172] (0xc000163400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31283\nConnection to 172.17.0.10 31283 port [tcp/31283] succeeded!\nI0426 21:14:03.085084 270 log.go:172] (0xc000bd0420) Data frame received for 3\nI0426 21:14:03.085250 270 log.go:172] (0xc000a70000) (3) Data frame handling\nI0426 21:14:03.086971 270 log.go:172] (0xc000bd0420) Data frame received for 1\nI0426 21:14:03.087008 270 log.go:172] (0xc000a04000) (1) Data frame handling\nI0426 21:14:03.087043 270 log.go:172] (0xc000a04000) (1) Data frame sent\nI0426 21:14:03.087292 270 log.go:172] (0xc000bd0420) (0xc000a04000) Stream removed, broadcasting: 1\nI0426 21:14:03.087336 270 log.go:172] (0xc000bd0420) Go away received\nI0426 21:14:03.087690 270 log.go:172] (0xc000bd0420) (0xc000a04000) Stream removed, broadcasting: 1\nI0426 21:14:03.087711 270 log.go:172] (0xc000bd0420) (0xc000a70000) Stream removed, broadcasting: 3\nI0426 21:14:03.087721 270 log.go:172] (0xc000bd0420) (0xc000163400) Stream removed, broadcasting: 5\n" Apr 26 21:14:03.093: INFO: stdout: "" Apr 26 21:14:03.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1597 execpodzvrhd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31283' Apr 26 21:14:03.294: INFO: stderr: "I0426 21:14:03.223558 290 log.go:172] (0xc0006eea50) (0xc00087a1e0) Create stream\nI0426 21:14:03.223618 290 log.go:172] (0xc0006eea50) (0xc00087a1e0) Stream added, broadcasting: 1\nI0426 21:14:03.226100 290 log.go:172] (0xc0006eea50) Reply frame received for 1\nI0426 21:14:03.226152 290 log.go:172] (0xc0006eea50) (0xc00087a280) Create stream\nI0426 21:14:03.226172 290 log.go:172] 
(0xc0006eea50) (0xc00087a280) Stream added, broadcasting: 3\nI0426 21:14:03.227015 290 log.go:172] (0xc0006eea50) Reply frame received for 3\nI0426 21:14:03.227039 290 log.go:172] (0xc0006eea50) (0xc0005a39a0) Create stream\nI0426 21:14:03.227046 290 log.go:172] (0xc0006eea50) (0xc0005a39a0) Stream added, broadcasting: 5\nI0426 21:14:03.227756 290 log.go:172] (0xc0006eea50) Reply frame received for 5\nI0426 21:14:03.285757 290 log.go:172] (0xc0006eea50) Data frame received for 5\nI0426 21:14:03.285810 290 log.go:172] (0xc0005a39a0) (5) Data frame handling\nI0426 21:14:03.285832 290 log.go:172] (0xc0005a39a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 31283\nI0426 21:14:03.286166 290 log.go:172] (0xc0006eea50) Data frame received for 3\nI0426 21:14:03.286186 290 log.go:172] (0xc00087a280) (3) Data frame handling\nI0426 21:14:03.286212 290 log.go:172] (0xc0006eea50) Data frame received for 5\nI0426 21:14:03.286245 290 log.go:172] (0xc0005a39a0) (5) Data frame handling\nI0426 21:14:03.286277 290 log.go:172] (0xc0005a39a0) (5) Data frame sent\nConnection to 172.17.0.8 31283 port [tcp/31283] succeeded!\nI0426 21:14:03.286565 290 log.go:172] (0xc0006eea50) Data frame received for 5\nI0426 21:14:03.286604 290 log.go:172] (0xc0005a39a0) (5) Data frame handling\nI0426 21:14:03.288144 290 log.go:172] (0xc0006eea50) Data frame received for 1\nI0426 21:14:03.288159 290 log.go:172] (0xc00087a1e0) (1) Data frame handling\nI0426 21:14:03.288176 290 log.go:172] (0xc00087a1e0) (1) Data frame sent\nI0426 21:14:03.288192 290 log.go:172] (0xc0006eea50) (0xc00087a1e0) Stream removed, broadcasting: 1\nI0426 21:14:03.288288 290 log.go:172] (0xc0006eea50) Go away received\nI0426 21:14:03.288648 290 log.go:172] (0xc0006eea50) (0xc00087a1e0) Stream removed, broadcasting: 1\nI0426 21:14:03.288674 290 log.go:172] (0xc0006eea50) (0xc00087a280) Stream removed, broadcasting: 3\nI0426 21:14:03.288693 290 log.go:172] (0xc0006eea50) (0xc0005a39a0) Stream removed, broadcasting: 5\n" Apr 26 
21:14:03.294: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:14:03.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1597" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.460 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":31,"skipped":588,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:14:03.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:14:07.386: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8835" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":594,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:14:07.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e2baa1bd-d97d-442e-8686-b61209ad3c0e STEP: Creating a pod to test consume secrets Apr 26 21:14:07.492: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848" in namespace "projected-1277" to be "success or failure" Apr 26 21:14:07.509: INFO: Pod "pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848": Phase="Pending", Reason="", readiness=false. Elapsed: 17.781552ms Apr 26 21:14:09.541: INFO: Pod "pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049813864s Apr 26 21:14:11.546: INFO: Pod "pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.054140238s Apr 26 21:14:13.550: INFO: Pod "pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058189295s STEP: Saw pod success Apr 26 21:14:13.550: INFO: Pod "pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848" satisfied condition "success or failure" Apr 26 21:14:13.553: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848 container projected-secret-volume-test: STEP: delete the pod Apr 26 21:14:13.584: INFO: Waiting for pod pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848 to disappear Apr 26 21:14:13.588: INFO: Pod pod-projected-secrets-91584746-56fd-4acc-9cd8-f0f269992848 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:14:13.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1277" for this suite. 
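The "with mappings" variant above projects a secret through an explicit items list, where each entry selects a key and renames it to a path inside the mount. A minimal sketch of that key-to-path mapping — the secret data and paths here are made-up examples, not the test's actual values:

```python
def apply_item_mappings(data, items):
    """Project only the listed keys, renaming each to its 'path', the way
    a projected secret volume's items mapping works (simplified)."""
    return {item["path"]: data[item["key"]] for item in items}

# Hypothetical secret data and mapping: only 'username' is projected,
# and it lands at creds/user instead of its original key name.
files = apply_item_mappings(
    {"username": "admin", "password": "s3cr3t"},
    [{"key": "username", "path": "creds/user"}],
)
```

Keys not listed in items are simply not projected, which is what distinguishes this test from the plain secret-volume case.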
• [SLOW TEST:6.202 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":605,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:14:13.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:14:13.716: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e9b4e7e9-23ee-4995-8ac5-108a1dd4ac27" in namespace "security-context-test-9239" to be "success or failure"
Apr 26 21:14:13.735: INFO: Pod "busybox-privileged-false-e9b4e7e9-23ee-4995-8ac5-108a1dd4ac27": Phase="Pending", Reason="", readiness=false. Elapsed: 19.389896ms
Apr 26 21:14:15.739: INFO: Pod "busybox-privileged-false-e9b4e7e9-23ee-4995-8ac5-108a1dd4ac27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023149676s
Apr 26 21:14:17.741: INFO: Pod "busybox-privileged-false-e9b4e7e9-23ee-4995-8ac5-108a1dd4ac27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025516329s
Apr 26 21:14:17.741: INFO: Pod "busybox-privileged-false-e9b4e7e9-23ee-4995-8ac5-108a1dd4ac27" satisfied condition "success or failure"
Apr 26 21:14:17.746: INFO: Got logs for pod "busybox-privileged-false-e9b4e7e9-23ee-4995-8ac5-108a1dd4ac27": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:14:17.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9239" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":635,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:14:17.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Apr 26 21:14:17.818: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:14:24.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5066" for this suite.
• [SLOW TEST:7.287 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":35,"skipped":644,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:14:25.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-62f1dc76-9119-4342-900b-018c3ab72e2e in namespace container-probe-9903
Apr 26 21:14:29.107: INFO: Started pod busybox-62f1dc76-9119-4342-900b-018c3ab72e2e in namespace container-probe-9903
STEP: checking the pod's current state and verifying that restartCount is present
Apr 26 21:14:29.110: INFO: Initial restart count of pod busybox-62f1dc76-9119-4342-900b-018c3ab72e2e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:18:29.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9903" for this suite.
• [SLOW TEST:244.679 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":662,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:18:29.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:18:29.795: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:18:36.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7437" for this suite.
• [SLOW TEST:6.520 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":37,"skipped":665,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:18:36.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Apr 26 21:18:36.300: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 26 21:18:36.314: INFO: Waiting for terminating namespaces to be deleted...
Apr 26 21:18:36.316: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Apr 26 21:18:36.332: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 26 21:18:36.332: INFO: Container kindnet-cni ready: true, restart count 0
Apr 26 21:18:36.332: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 26 21:18:36.332: INFO: Container kube-proxy ready: true, restart count 0
Apr 26 21:18:36.332: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Apr 26 21:18:36.349: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 26 21:18:36.349: INFO: Container kindnet-cni ready: true, restart count 0
Apr 26 21:18:36.349: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
Apr 26 21:18:36.349: INFO: Container kube-bench ready: false, restart count 0
Apr 26 21:18:36.349: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 26 21:18:36.349: INFO: Container kube-proxy ready: true, restart count 0
Apr 26 21:18:36.349: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
Apr 26 21:18:36.349: INFO: Container kube-hunter ready: false, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-368ffc8a-12bc-47d8-b494-7eb18c7b9ec4 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-368ffc8a-12bc-47d8-b494-7eb18c7b9ec4 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-368ffc8a-12bc-47d8-b494-7eb18c7b9ec4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:18:44.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4672" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:8.291 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":38,"skipped":679,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:18:44.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 26 21:18:49.198: INFO: Successfully updated pod "labelsupdate0b6e3661-773b-497e-9b50-c49c32444cf0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:18:51.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-819" for this suite.
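The Downward API volume test above ("should update labels on modification") works by projecting the pod's own labels into a file and re-reading that file after the labels are patched. A minimal sketch of such a pod, with illustrative names and values that are not taken from the e2e framework's generated manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo          # illustrative name
  labels:
    stage: initial           # patching this label rewrites the projected file
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

The kubelet refreshes the projected file asynchronously after a label change, which is why the test polls the pod logs for the new value rather than asserting immediately.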
• [SLOW TEST:6.692 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":701,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:18:51.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 26 21:18:55.380: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:18:55.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8681" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":715,"failed":0}
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:18:55.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-b4e77996-760e-4c77-9159-32ea9487a5fd
STEP: Creating a pod to test consume configMaps
Apr 26 21:18:55.474: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e" in namespace "projected-2689" to be "success or failure"
Apr 26 21:18:55.499: INFO: Pod "pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.394541ms
Apr 26 21:18:57.515: INFO: Pod "pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040687385s
Apr 26 21:18:59.539: INFO: Pod "pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064677061s
STEP: Saw pod success
Apr 26 21:18:59.539: INFO: Pod "pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e" satisfied condition "success or failure"
Apr 26 21:18:59.542: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e container projected-configmap-volume-test:
STEP: delete the pod
Apr 26 21:18:59.564: INFO: Waiting for pod pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e to disappear
Apr 26 21:18:59.569: INFO: Pod pod-projected-configmaps-1a844ab3-1d3d-4eaa-8818-be9e9272f92e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:18:59.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2689" for this suite.
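The Projected configMap test that just finished ("consumable from pods in volume as non-root") mounts a ConfigMap through a `projected` volume while the container runs as a non-root UID. A hedged sketch of the shape of such a pod; the names, key, and UID here are illustrative, not the values the test generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo     # illustrative name
spec:
  securityContext:
    runAsUser: 1000                  # non-root, as in the test name
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]   # "data-1" is an assumed key
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume  # ConfigMap created beforehand
```

The point of the non-root variant is that the projected files must still be readable by the container's UID, which the volume plugin arranges via file permissions.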
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":715,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:18:59.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 26 21:18:59.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30" in namespace "projected-7924" to be "success or failure"
Apr 26 21:18:59.881: INFO: Pod "downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30": Phase="Pending", Reason="", readiness=false. Elapsed: 81.776022ms
Apr 26 21:19:01.883: INFO: Pod "downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084115451s
Apr 26 21:19:03.888: INFO: Pod "downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088761718s
STEP: Saw pod success
Apr 26 21:19:03.888: INFO: Pod "downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30" satisfied condition "success or failure"
Apr 26 21:19:03.891: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30 container client-container:
STEP: delete the pod
Apr 26 21:19:03.954: INFO: Waiting for pod downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30 to disappear
Apr 26 21:19:03.970: INFO: Pod downwardapi-volume-cf5d4dfa-5303-40b9-ba8e-21e9f58aef30 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:03.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7924" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":765,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:03.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-rgn4
STEP: Creating a pod to test atomic-volume-subpath
Apr 26 21:19:04.122: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rgn4" in namespace "subpath-7240" to be "success or failure"
Apr 26 21:19:04.155: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.402262ms
Apr 26 21:19:06.158: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036796639s
Apr 26 21:19:08.162: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 4.040820723s
Apr 26 21:19:10.166: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 6.044718981s
Apr 26 21:19:12.170: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 8.04843571s
Apr 26 21:19:14.173: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 10.051435705s
Apr 26 21:19:16.178: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 12.056105334s
Apr 26 21:19:18.182: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 14.060106745s
Apr 26 21:19:20.186: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 16.064642855s
Apr 26 21:19:22.190: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 18.068684528s
Apr 26 21:19:24.195: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 20.073000029s
Apr 26 21:19:26.199: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Running", Reason="", readiness=true. Elapsed: 22.077888768s
Apr 26 21:19:28.208: INFO: Pod "pod-subpath-test-configmap-rgn4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.086800085s
STEP: Saw pod success
Apr 26 21:19:28.208: INFO: Pod "pod-subpath-test-configmap-rgn4" satisfied condition "success or failure"
Apr 26 21:19:28.211: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-rgn4 container test-container-subpath-configmap-rgn4:
STEP: delete the pod
Apr 26 21:19:28.230: INFO: Waiting for pod pod-subpath-test-configmap-rgn4 to disappear
Apr 26 21:19:28.252: INFO: Pod pod-subpath-test-configmap-rgn4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rgn4
Apr 26 21:19:28.252: INFO: Deleting pod "pod-subpath-test-configmap-rgn4" in namespace "subpath-7240"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:28.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7240" for this suite.
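The Subpath test above mounts a single ConfigMap key directly over an existing file in the container image using `subPath`. A minimal sketch of that pattern; the pod name, ConfigMap name, key, and target path are illustrative assumptions, not the objects the test actually created:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                   # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/etc/resolv.conf"]  # a file that already exists in the image
    volumeMounts:
    - name: cfg
      mountPath: /etc/resolv.conf         # mount lands on the existing file itself
      subPath: data                       # only this key from the volume is mounted
  volumes:
  - name: cfg
    configMap:
      name: subpath-configmap             # assumed to exist with key "data"
```

Without `subPath`, mounting at a file path would shadow the whole directory; `subPath` lets one key replace exactly one file.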
• [SLOW TEST:24.284 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":43,"skipped":806,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:28.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Apr 26 21:19:32.364: INFO: Pod pod-hostip-b40c87ab-88dc-4931-b4eb-4c61b59e3b9a has hostIP: 172.17.0.10
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:32.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3392" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":817,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:32.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 26 21:19:32.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9014'
Apr 26 21:19:32.739: INFO: stderr: ""
Apr 26 21:19:32.739: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759
Apr 26 21:19:32.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9014'
Apr 26 21:19:36.855: INFO: stderr: ""
Apr 26 21:19:36.855: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:36.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9014" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":45,"skipped":823,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:36.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 26 21:19:36.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a" in namespace "projected-6711" to be "success or failure"
Apr 26 21:19:36.949: INFO: Pod "downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.474158ms
Apr 26 21:19:38.953: INFO: Pod "downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021861782s
Apr 26 21:19:40.956: INFO: Pod "downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025340798s
STEP: Saw pod success
Apr 26 21:19:40.956: INFO: Pod "downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a" satisfied condition "success or failure"
Apr 26 21:19:40.959: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a container client-container:
STEP: delete the pod
Apr 26 21:19:40.978: INFO: Waiting for pod downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a to disappear
Apr 26 21:19:40.982: INFO: Pod downwardapi-volume-e321a143-61d7-49c3-90c2-9ca0ec03ad9a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:40.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6711" for this suite.
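The Projected downwardAPI test above ("should provide container's cpu request") exposes the container's own CPU request as a file via `resourceFieldRef`. A sketch of that mechanism; names, the request value, and the divisor are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo       # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m              # the value that gets projected
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m          # report the request in millicores
```

The `divisor` controls the unit of the projected value; omitting it reports whole cores rounded up.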
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:19:40.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-05995357-657f-4c75-9272-6ec9de0a48ec STEP: Creating a pod to test consume configMaps Apr 26 21:19:41.220: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db" in namespace "projected-536" to be "success or failure" Apr 26 21:19:41.246: INFO: Pod "pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db": Phase="Pending", Reason="", readiness=false. Elapsed: 26.334492ms Apr 26 21:19:43.250: INFO: Pod "pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030030847s Apr 26 21:19:45.254: INFO: Pod "pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034037447s
STEP: Saw pod success
Apr 26 21:19:45.254: INFO: Pod "pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db" satisfied condition "success or failure"
Apr 26 21:19:45.257: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db container projected-configmap-volume-test: 
STEP: delete the pod
Apr 26 21:19:45.271: INFO: Waiting for pod pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db to disappear
Apr 26 21:19:45.276: INFO: Pod pod-projected-configmaps-7c65d982-2343-494c-8bea-f72785e703db no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:45.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-536" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":884,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:45.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Apr 26 21:19:45.380: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:52.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4464" for this suite.
• [SLOW TEST:7.275 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":48,"skipped":895,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:52.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 
rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0426 21:19:53.776203 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 26 21:19:53.776: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:53.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8765" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":49,"skipped":904,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:53.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Apr 26 21:19:53.847: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:19:59.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5574" for this suite. 
• [SLOW TEST:6.142 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":50,"skipped":946,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:19:59.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d88cc0aa-2dc5-4183-bf9c-03ed5a084345
STEP: Creating a pod to test consume secrets
Apr 26 21:20:00.015: INFO: Waiting up to 5m0s for pod "pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa" in namespace "secrets-5947" to be "success or failure"
Apr 26 21:20:00.033: INFO: Pod "pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.004204ms
Apr 26 21:20:02.091: INFO: Pod "pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076097237s
Apr 26 21:20:04.095: INFO: Pod "pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079881949s
STEP: Saw pod success
Apr 26 21:20:04.095: INFO: Pod "pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa" satisfied condition "success or failure"
Apr 26 21:20:04.098: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa container secret-volume-test: 
STEP: delete the pod
Apr 26 21:20:04.135: INFO: Waiting for pod pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa to disappear
Apr 26 21:20:04.157: INFO: Pod pod-secrets-d5297153-5065-44c4-887a-8b6cfefa0afa no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:20:04.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5947" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":964,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:20:04.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3419
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-3419
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3419
Apr 26 21:20:04.240: INFO: Found 0 stateful pods, waiting for 1
Apr 26 21:20:14.245: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 26 21:20:14.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-0 -- /bin/sh -x -c mv 
-v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:20:17.516: INFO: stderr: "I0426 21:20:17.389605 355 log.go:172] (0xc0008140b0) (0xc000f5e0a0) Create stream\nI0426 21:20:17.389662 355 log.go:172] (0xc0008140b0) (0xc000f5e0a0) Stream added, broadcasting: 1\nI0426 21:20:17.392749 355 log.go:172] (0xc0008140b0) Reply frame received for 1\nI0426 21:20:17.392804 355 log.go:172] (0xc0008140b0) (0xc0008e6000) Create stream\nI0426 21:20:17.392821 355 log.go:172] (0xc0008140b0) (0xc0008e6000) Stream added, broadcasting: 3\nI0426 21:20:17.393763 355 log.go:172] (0xc0008140b0) Reply frame received for 3\nI0426 21:20:17.393790 355 log.go:172] (0xc0008140b0) (0xc000f5e140) Create stream\nI0426 21:20:17.393799 355 log.go:172] (0xc0008140b0) (0xc000f5e140) Stream added, broadcasting: 5\nI0426 21:20:17.394619 355 log.go:172] (0xc0008140b0) Reply frame received for 5\nI0426 21:20:17.474326 355 log.go:172] (0xc0008140b0) Data frame received for 5\nI0426 21:20:17.474372 355 log.go:172] (0xc000f5e140) (5) Data frame handling\nI0426 21:20:17.474412 355 log.go:172] (0xc000f5e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 21:20:17.506865 355 log.go:172] (0xc0008140b0) Data frame received for 5\nI0426 21:20:17.506917 355 log.go:172] (0xc000f5e140) (5) Data frame handling\nI0426 21:20:17.506950 355 log.go:172] (0xc0008140b0) Data frame received for 3\nI0426 21:20:17.506964 355 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0426 21:20:17.506990 355 log.go:172] (0xc0008e6000) (3) Data frame sent\nI0426 21:20:17.507007 355 log.go:172] (0xc0008140b0) Data frame received for 3\nI0426 21:20:17.507024 355 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0426 21:20:17.508942 355 log.go:172] (0xc0008140b0) Data frame received for 1\nI0426 21:20:17.508988 355 log.go:172] (0xc000f5e0a0) (1) Data frame handling\nI0426 21:20:17.509004 355 log.go:172] (0xc000f5e0a0) (1) Data frame sent\nI0426 21:20:17.509017 355 log.go:172] (0xc0008140b0) 
(0xc000f5e0a0) Stream removed, broadcasting: 1\nI0426 21:20:17.509058 355 log.go:172] (0xc0008140b0) Go away received\nI0426 21:20:17.509518 355 log.go:172] (0xc0008140b0) (0xc000f5e0a0) Stream removed, broadcasting: 1\nI0426 21:20:17.509537 355 log.go:172] (0xc0008140b0) (0xc0008e6000) Stream removed, broadcasting: 3\nI0426 21:20:17.509548 355 log.go:172] (0xc0008140b0) (0xc000f5e140) Stream removed, broadcasting: 5\n" Apr 26 21:20:17.516: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:20:17.516: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 21:20:17.520: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 26 21:20:27.525: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:20:27.525: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 21:20:27.564: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:20:27.565: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC }] Apr 26 21:20:27.565: INFO: Apr 26 21:20:27.565: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 26 21:20:28.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968925976s Apr 26 21:20:29.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.962911578s Apr 26 21:20:30.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.958581027s Apr 26 21:20:31.584: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 5.954677067s Apr 26 21:20:32.619: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.949781048s Apr 26 21:20:33.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.914034913s Apr 26 21:20:34.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.908545233s Apr 26 21:20:35.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.897142449s Apr 26 21:20:36.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 891.682108ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3419 Apr 26 21:20:37.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:20:37.842: INFO: stderr: "I0426 21:20:37.778797 391 log.go:172] (0xc000a48e70) (0xc000663e00) Create stream\nI0426 21:20:37.778874 391 log.go:172] (0xc000a48e70) (0xc000663e00) Stream added, broadcasting: 1\nI0426 21:20:37.782044 391 log.go:172] (0xc000a48e70) Reply frame received for 1\nI0426 21:20:37.782078 391 log.go:172] (0xc000a48e70) (0xc000663ea0) Create stream\nI0426 21:20:37.782089 391 log.go:172] (0xc000a48e70) (0xc000663ea0) Stream added, broadcasting: 3\nI0426 21:20:37.782997 391 log.go:172] (0xc000a48e70) Reply frame received for 3\nI0426 21:20:37.783026 391 log.go:172] (0xc000a48e70) (0xc000a26000) Create stream\nI0426 21:20:37.783035 391 log.go:172] (0xc000a48e70) (0xc000a26000) Stream added, broadcasting: 5\nI0426 21:20:37.783786 391 log.go:172] (0xc000a48e70) Reply frame received for 5\nI0426 21:20:37.835552 391 log.go:172] (0xc000a48e70) Data frame received for 3\nI0426 21:20:37.835600 391 log.go:172] (0xc000663ea0) (3) Data frame handling\nI0426 21:20:37.835631 391 log.go:172] (0xc000a48e70) Data frame received for 5\nI0426 21:20:37.835676 391 log.go:172] (0xc000a26000) (5) Data frame 
handling\nI0426 21:20:37.835693 391 log.go:172] (0xc000a26000) (5) Data frame sent\nI0426 21:20:37.835713 391 log.go:172] (0xc000a48e70) Data frame received for 5\nI0426 21:20:37.835723 391 log.go:172] (0xc000a26000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 21:20:37.835755 391 log.go:172] (0xc000663ea0) (3) Data frame sent\nI0426 21:20:37.835792 391 log.go:172] (0xc000a48e70) Data frame received for 3\nI0426 21:20:37.835820 391 log.go:172] (0xc000663ea0) (3) Data frame handling\nI0426 21:20:37.836995 391 log.go:172] (0xc000a48e70) Data frame received for 1\nI0426 21:20:37.837030 391 log.go:172] (0xc000663e00) (1) Data frame handling\nI0426 21:20:37.837045 391 log.go:172] (0xc000663e00) (1) Data frame sent\nI0426 21:20:37.837066 391 log.go:172] (0xc000a48e70) (0xc000663e00) Stream removed, broadcasting: 1\nI0426 21:20:37.837095 391 log.go:172] (0xc000a48e70) Go away received\nI0426 21:20:37.837546 391 log.go:172] (0xc000a48e70) (0xc000663e00) Stream removed, broadcasting: 1\nI0426 21:20:37.837566 391 log.go:172] (0xc000a48e70) (0xc000663ea0) Stream removed, broadcasting: 3\nI0426 21:20:37.837577 391 log.go:172] (0xc000a48e70) (0xc000a26000) Stream removed, broadcasting: 5\n" Apr 26 21:20:37.842: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 21:20:37.842: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 21:20:37.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:20:38.059: INFO: stderr: "I0426 21:20:37.974114 411 log.go:172] (0xc000a6e790) (0xc00053c960) Create stream\nI0426 21:20:37.974177 411 log.go:172] (0xc000a6e790) (0xc00053c960) Stream added, broadcasting: 1\nI0426 21:20:37.976804 411 log.go:172] (0xc000a6e790) Reply frame received for 
1\nI0426 21:20:37.976867 411 log.go:172] (0xc000a6e790) (0xc00099a000) Create stream\nI0426 21:20:37.976890 411 log.go:172] (0xc000a6e790) (0xc00099a000) Stream added, broadcasting: 3\nI0426 21:20:37.978188 411 log.go:172] (0xc000a6e790) Reply frame received for 3\nI0426 21:20:37.978234 411 log.go:172] (0xc000a6e790) (0xc000607a40) Create stream\nI0426 21:20:37.978254 411 log.go:172] (0xc000a6e790) (0xc000607a40) Stream added, broadcasting: 5\nI0426 21:20:37.979250 411 log.go:172] (0xc000a6e790) Reply frame received for 5\nI0426 21:20:38.053611 411 log.go:172] (0xc000a6e790) Data frame received for 3\nI0426 21:20:38.053633 411 log.go:172] (0xc00099a000) (3) Data frame handling\nI0426 21:20:38.053646 411 log.go:172] (0xc00099a000) (3) Data frame sent\nI0426 21:20:38.053652 411 log.go:172] (0xc000a6e790) Data frame received for 3\nI0426 21:20:38.053656 411 log.go:172] (0xc00099a000) (3) Data frame handling\nI0426 21:20:38.053851 411 log.go:172] (0xc000a6e790) Data frame received for 5\nI0426 21:20:38.053881 411 log.go:172] (0xc000607a40) (5) Data frame handling\nI0426 21:20:38.053895 411 log.go:172] (0xc000607a40) (5) Data frame sent\nI0426 21:20:38.053905 411 log.go:172] (0xc000a6e790) Data frame received for 5\nI0426 21:20:38.053915 411 log.go:172] (0xc000607a40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0426 21:20:38.055329 411 log.go:172] (0xc000a6e790) Data frame received for 1\nI0426 21:20:38.055344 411 log.go:172] (0xc00053c960) (1) Data frame handling\nI0426 21:20:38.055351 411 log.go:172] (0xc00053c960) (1) Data frame sent\nI0426 21:20:38.055363 411 log.go:172] (0xc000a6e790) (0xc00053c960) Stream removed, broadcasting: 1\nI0426 21:20:38.055608 411 log.go:172] (0xc000a6e790) Go away received\nI0426 21:20:38.055662 411 log.go:172] (0xc000a6e790) (0xc00053c960) Stream removed, broadcasting: 1\nI0426 21:20:38.055683 411 log.go:172] (0xc000a6e790) 
(0xc00099a000) Stream removed, broadcasting: 3\nI0426 21:20:38.055691 411 log.go:172] (0xc000a6e790) (0xc000607a40) Stream removed, broadcasting: 5\n" Apr 26 21:20:38.059: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 21:20:38.059: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 21:20:38.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:20:38.254: INFO: stderr: "I0426 21:20:38.184136 432 log.go:172] (0xc0009be000) (0xc000a0e000) Create stream\nI0426 21:20:38.184180 432 log.go:172] (0xc0009be000) (0xc000a0e000) Stream added, broadcasting: 1\nI0426 21:20:38.186311 432 log.go:172] (0xc0009be000) Reply frame received for 1\nI0426 21:20:38.186349 432 log.go:172] (0xc0009be000) (0xc0007594a0) Create stream\nI0426 21:20:38.186359 432 log.go:172] (0xc0009be000) (0xc0007594a0) Stream added, broadcasting: 3\nI0426 21:20:38.187309 432 log.go:172] (0xc0009be000) Reply frame received for 3\nI0426 21:20:38.187365 432 log.go:172] (0xc0009be000) (0xc0008f0000) Create stream\nI0426 21:20:38.187383 432 log.go:172] (0xc0009be000) (0xc0008f0000) Stream added, broadcasting: 5\nI0426 21:20:38.188366 432 log.go:172] (0xc0009be000) Reply frame received for 5\nI0426 21:20:38.246063 432 log.go:172] (0xc0009be000) Data frame received for 5\nI0426 21:20:38.246192 432 log.go:172] (0xc0008f0000) (5) Data frame handling\nI0426 21:20:38.246219 432 log.go:172] (0xc0008f0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0426 21:20:38.246250 432 log.go:172] (0xc0009be000) Data frame received for 3\nI0426 21:20:38.246264 432 log.go:172] (0xc0007594a0) (3) Data frame handling\nI0426 21:20:38.246280 432 log.go:172] 
(0xc0007594a0) (3) Data frame sent\nI0426 21:20:38.246298 432 log.go:172] (0xc0009be000) Data frame received for 3\nI0426 21:20:38.246308 432 log.go:172] (0xc0007594a0) (3) Data frame handling\nI0426 21:20:38.246336 432 log.go:172] (0xc0009be000) Data frame received for 5\nI0426 21:20:38.246358 432 log.go:172] (0xc0008f0000) (5) Data frame handling\nI0426 21:20:38.248111 432 log.go:172] (0xc0009be000) Data frame received for 1\nI0426 21:20:38.248135 432 log.go:172] (0xc000a0e000) (1) Data frame handling\nI0426 21:20:38.248149 432 log.go:172] (0xc000a0e000) (1) Data frame sent\nI0426 21:20:38.248160 432 log.go:172] (0xc0009be000) (0xc000a0e000) Stream removed, broadcasting: 1\nI0426 21:20:38.248174 432 log.go:172] (0xc0009be000) Go away received\nI0426 21:20:38.248523 432 log.go:172] (0xc0009be000) (0xc000a0e000) Stream removed, broadcasting: 1\nI0426 21:20:38.248541 432 log.go:172] (0xc0009be000) (0xc0007594a0) Stream removed, broadcasting: 3\nI0426 21:20:38.248551 432 log.go:172] (0xc0009be000) (0xc0008f0000) Stream removed, broadcasting: 5\n" Apr 26 21:20:38.254: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 21:20:38.254: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 21:20:38.258: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 26 21:20:48.263: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 21:20:48.263: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 21:20:48.263: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 26 21:20:48.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:20:48.477: INFO: stderr: "I0426 21:20:48.389802 452 log.go:172] (0xc0000f5760) (0xc0005f5c20) Create stream\nI0426 21:20:48.389856 452 log.go:172] (0xc0000f5760) (0xc0005f5c20) Stream added, broadcasting: 1\nI0426 21:20:48.392899 452 log.go:172] (0xc0000f5760) Reply frame received for 1\nI0426 21:20:48.392937 452 log.go:172] (0xc0000f5760) (0xc00081c000) Create stream\nI0426 21:20:48.392960 452 log.go:172] (0xc0000f5760) (0xc00081c000) Stream added, broadcasting: 3\nI0426 21:20:48.394219 452 log.go:172] (0xc0000f5760) Reply frame received for 3\nI0426 21:20:48.394278 452 log.go:172] (0xc0000f5760) (0xc00021e000) Create stream\nI0426 21:20:48.394296 452 log.go:172] (0xc0000f5760) (0xc00021e000) Stream added, broadcasting: 5\nI0426 21:20:48.395372 452 log.go:172] (0xc0000f5760) Reply frame received for 5\nI0426 21:20:48.468797 452 log.go:172] (0xc0000f5760) Data frame received for 3\nI0426 21:20:48.468827 452 log.go:172] (0xc00081c000) (3) Data frame handling\nI0426 21:20:48.468839 452 log.go:172] (0xc00081c000) (3) Data frame sent\nI0426 21:20:48.468846 452 log.go:172] (0xc0000f5760) Data frame received for 3\nI0426 21:20:48.468850 452 log.go:172] (0xc00081c000) (3) Data frame handling\nI0426 21:20:48.468880 452 log.go:172] (0xc0000f5760) Data frame received for 5\nI0426 21:20:48.468890 452 log.go:172] (0xc00021e000) (5) Data frame handling\nI0426 21:20:48.468897 452 log.go:172] (0xc00021e000) (5) Data frame sent\nI0426 21:20:48.468903 452 log.go:172] (0xc0000f5760) Data frame received for 5\nI0426 21:20:48.468907 452 log.go:172] (0xc00021e000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 21:20:48.471258 452 log.go:172] (0xc0000f5760) Data frame received for 1\nI0426 21:20:48.471304 452 log.go:172] (0xc0005f5c20) (1) Data frame handling\nI0426 21:20:48.471340 452 log.go:172] (0xc0005f5c20) (1) Data frame sent\nI0426 21:20:48.471378 452 log.go:172] (0xc0000f5760) 
(0xc0005f5c20) Stream removed, broadcasting: 1\nI0426 21:20:48.471421 452 log.go:172] (0xc0000f5760) Go away received\nI0426 21:20:48.471945 452 log.go:172] (0xc0000f5760) (0xc0005f5c20) Stream removed, broadcasting: 1\nI0426 21:20:48.471972 452 log.go:172] (0xc0000f5760) (0xc00081c000) Stream removed, broadcasting: 3\nI0426 21:20:48.471986 452 log.go:172] (0xc0000f5760) (0xc00021e000) Stream removed, broadcasting: 5\n" Apr 26 21:20:48.477: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:20:48.477: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 21:20:48.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:20:48.717: INFO: stderr: "I0426 21:20:48.603141 473 log.go:172] (0xc000588dc0) (0xc000673e00) Create stream\nI0426 21:20:48.603206 473 log.go:172] (0xc000588dc0) (0xc000673e00) Stream added, broadcasting: 1\nI0426 21:20:48.605890 473 log.go:172] (0xc000588dc0) Reply frame received for 1\nI0426 21:20:48.605931 473 log.go:172] (0xc000588dc0) (0xc000673ea0) Create stream\nI0426 21:20:48.605942 473 log.go:172] (0xc000588dc0) (0xc000673ea0) Stream added, broadcasting: 3\nI0426 21:20:48.606956 473 log.go:172] (0xc000588dc0) Reply frame received for 3\nI0426 21:20:48.607009 473 log.go:172] (0xc000588dc0) (0xc000538b40) Create stream\nI0426 21:20:48.607024 473 log.go:172] (0xc000588dc0) (0xc000538b40) Stream added, broadcasting: 5\nI0426 21:20:48.607992 473 log.go:172] (0xc000588dc0) Reply frame received for 5\nI0426 21:20:48.682398 473 log.go:172] (0xc000588dc0) Data frame received for 5\nI0426 21:20:48.682431 473 log.go:172] (0xc000538b40) (5) Data frame handling\nI0426 21:20:48.682451 473 log.go:172] (0xc000538b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0426 21:20:48.708425 473 log.go:172] (0xc000588dc0) Data frame received for 3\nI0426 21:20:48.708446 473 log.go:172] (0xc000673ea0) (3) Data frame handling\nI0426 21:20:48.708459 473 log.go:172] (0xc000673ea0) (3) Data frame sent\nI0426 21:20:48.708669 473 log.go:172] (0xc000588dc0) Data frame received for 3\nI0426 21:20:48.708687 473 log.go:172] (0xc000673ea0) (3) Data frame handling\nI0426 21:20:48.708729 473 log.go:172] (0xc000588dc0) Data frame received for 5\nI0426 21:20:48.708762 473 log.go:172] (0xc000538b40) (5) Data frame handling\nI0426 21:20:48.710872 473 log.go:172] (0xc000588dc0) Data frame received for 1\nI0426 21:20:48.710897 473 log.go:172] (0xc000673e00) (1) Data frame handling\nI0426 21:20:48.710911 473 log.go:172] (0xc000673e00) (1) Data frame sent\nI0426 21:20:48.710930 473 log.go:172] (0xc000588dc0) (0xc000673e00) Stream removed, broadcasting: 1\nI0426 21:20:48.710947 473 log.go:172] (0xc000588dc0) Go away received\nI0426 21:20:48.711257 473 log.go:172] (0xc000588dc0) (0xc000673e00) Stream removed, broadcasting: 1\nI0426 21:20:48.711277 473 log.go:172] (0xc000588dc0) (0xc000673ea0) Stream removed, broadcasting: 3\nI0426 21:20:48.711284 473 log.go:172] (0xc000588dc0) (0xc000538b40) Stream removed, broadcasting: 5\n" Apr 26 21:20:48.717: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:20:48.717: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 21:20:48.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 21:20:48.973: INFO: stderr: "I0426 21:20:48.852946 494 log.go:172] (0xc0009840b0) (0xc0006b5e00) Create stream\nI0426 21:20:48.853291 494 log.go:172] (0xc0009840b0) (0xc0006b5e00) Stream added, broadcasting: 1\nI0426 21:20:48.855725 494 log.go:172] 
(0xc0009840b0) Reply frame received for 1\nI0426 21:20:48.855794 494 log.go:172] (0xc0009840b0) (0xc00066c780) Create stream\nI0426 21:20:48.855822 494 log.go:172] (0xc0009840b0) (0xc00066c780) Stream added, broadcasting: 3\nI0426 21:20:48.856629 494 log.go:172] (0xc0009840b0) Reply frame received for 3\nI0426 21:20:48.856671 494 log.go:172] (0xc0009840b0) (0xc0006b5ea0) Create stream\nI0426 21:20:48.856680 494 log.go:172] (0xc0009840b0) (0xc0006b5ea0) Stream added, broadcasting: 5\nI0426 21:20:48.857655 494 log.go:172] (0xc0009840b0) Reply frame received for 5\nI0426 21:20:48.922473 494 log.go:172] (0xc0009840b0) Data frame received for 5\nI0426 21:20:48.922507 494 log.go:172] (0xc0006b5ea0) (5) Data frame handling\nI0426 21:20:48.922523 494 log.go:172] (0xc0006b5ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 21:20:48.966702 494 log.go:172] (0xc0009840b0) Data frame received for 3\nI0426 21:20:48.966735 494 log.go:172] (0xc00066c780) (3) Data frame handling\nI0426 21:20:48.966744 494 log.go:172] (0xc00066c780) (3) Data frame sent\nI0426 21:20:48.966750 494 log.go:172] (0xc0009840b0) Data frame received for 3\nI0426 21:20:48.966755 494 log.go:172] (0xc00066c780) (3) Data frame handling\nI0426 21:20:48.966787 494 log.go:172] (0xc0009840b0) Data frame received for 5\nI0426 21:20:48.966804 494 log.go:172] (0xc0006b5ea0) (5) Data frame handling\nI0426 21:20:48.968192 494 log.go:172] (0xc0009840b0) Data frame received for 1\nI0426 21:20:48.968217 494 log.go:172] (0xc0006b5e00) (1) Data frame handling\nI0426 21:20:48.968240 494 log.go:172] (0xc0006b5e00) (1) Data frame sent\nI0426 21:20:48.968259 494 log.go:172] (0xc0009840b0) (0xc0006b5e00) Stream removed, broadcasting: 1\nI0426 21:20:48.968279 494 log.go:172] (0xc0009840b0) Go away received\nI0426 21:20:48.968660 494 log.go:172] (0xc0009840b0) (0xc0006b5e00) Stream removed, broadcasting: 1\nI0426 21:20:48.968681 494 log.go:172] (0xc0009840b0) (0xc00066c780) Stream removed, 
broadcasting: 3\nI0426 21:20:48.968701 494 log.go:172] (0xc0009840b0) (0xc0006b5ea0) Stream removed, broadcasting: 5\n" Apr 26 21:20:48.973: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 21:20:48.973: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 21:20:48.973: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 21:20:48.977: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 26 21:20:58.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:20:58.985: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:20:58.985: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 26 21:20:58.998: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:20:58.998: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC }] Apr 26 21:20:58.999: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:20:58.999: INFO: ss-2 
jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:20:58.999: INFO: Apr 26 21:20:58.999: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 21:21:00.002: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:00.002: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC }] Apr 26 21:21:00.002: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:00.002: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:00.003: INFO: Apr 26 21:21:00.003: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 21:21:01.230: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:01.230: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:04 +0000 UTC }] Apr 26 21:21:01.230: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:01.230: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:01.230: INFO: Apr 26 21:21:01.230: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 26 21:21:02.234: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:02.234: 
INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:02.234: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:02.234: INFO: Apr 26 21:21:02.234: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 26 21:21:03.239: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:03.239: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:03.239: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:03.239: INFO: Apr 26 21:21:03.239: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 26 21:21:04.244: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:04.244: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:04.244: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:04.244: INFO: Apr 26 21:21:04.244: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 26 21:21:05.249: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:05.249: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 
21:21:05.249: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:05.249: INFO: Apr 26 21:21:05.249: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 26 21:21:06.254: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:06.254: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:06.254: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:06.254: INFO: Apr 26 21:21:06.254: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 26 21:21:07.259: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:07.259: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:07.259: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:07.259: INFO: Apr 26 21:21:07.259: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 26 21:21:08.264: INFO: POD NODE PHASE GRACE CONDITIONS Apr 26 21:21:08.264: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] Apr 26 21:21:08.264: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-26 21:20:27 +0000 UTC }] 
Apr 26 21:21:08.264: INFO: Apr 26 21:21:08.264: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3419 Apr 26 21:21:09.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:21:09.463: INFO: rc: 1 Apr 26 21:21:09.463: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:21:19.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:21:19.568: INFO: rc: 1 Apr 26 21:21:19.568: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:21:29.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:21:29.667: INFO: rc: 1 Apr 26 21:21:29.667: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:21:39.667: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:21:39.797: INFO: rc: 1 Apr 26 21:21:39.797: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:21:49.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:21:49.903: INFO: rc: 1 Apr 26 21:21:49.903: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:21:59.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:22:00.005: INFO: rc: 1 Apr 26 21:22:00.005: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:22:10.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:22:10.126: INFO: rc: 1 Apr 26 21:22:10.126: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:22:20.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:22:20.216: INFO: rc: 1 Apr 26 21:22:20.216: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:22:30.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:22:30.319: INFO: rc: 1 Apr 26 21:22:30.319: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:22:40.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:22:40.419: INFO: rc: 1 Apr 26 21:22:40.419: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: 
stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:22:50.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:22:50.523: INFO: rc: 1 Apr 26 21:22:50.523: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:23:00.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:23:00.626: INFO: rc: 1 Apr 26 21:23:00.626: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:23:10.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:23:10.730: INFO: rc: 1 Apr 26 21:23:10.730: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:23:20.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:23:20.835: INFO: rc: 1 Apr 26 21:23:20.835: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:23:30.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:23:30.930: INFO: rc: 1 Apr 26 21:23:30.930: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:23:40.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:23:41.027: INFO: rc: 1 Apr 26 21:23:41.027: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:23:51.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:23:51.130: INFO: rc: 1 Apr 26 21:23:51.130: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:24:01.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:24:01.247: INFO: rc: 1 Apr 26 21:24:01.247: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:24:11.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:24:11.346: INFO: rc: 1 Apr 26 21:24:11.346: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:24:21.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:24:21.461: INFO: rc: 1 Apr 26 21:24:21.461: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:24:31.461: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:24:31.555: INFO: rc: 1 Apr 26 21:24:31.555: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:24:41.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:24:41.647: INFO: rc: 1 Apr 26 21:24:41.647: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:24:51.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:24:51.748: INFO: rc: 1 Apr 26 21:24:51.748: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:25:01.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:25:01.857: INFO: rc: 1 Apr 26 21:25:01.857: INFO: Waiting 10s to 
retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:25:11.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:25:11.973: INFO: rc: 1 Apr 26 21:25:11.973: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:25:21.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:25:22.080: INFO: rc: 1 Apr 26 21:25:22.080: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:25:32.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:25:32.185: INFO: rc: 1 Apr 26 21:25:32.185: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:25:42.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:25:42.282: INFO: rc: 1 Apr 26 21:25:42.283: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:25:52.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:25:52.386: INFO: rc: 1 Apr 26 21:25:52.386: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:26:02.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 21:26:02.485: INFO: rc: 1 Apr 26 21:26:02.485: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 26 21:26:12.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3419 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Apr 26 21:26:12.586: INFO: rc: 1 Apr 26 21:26:12.586: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Apr 26 21:26:12.586: INFO: Scaling statefulset ss to 0 Apr 26 21:26:12.602: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 26 21:26:12.604: INFO: Deleting all statefulset in ns statefulset-3419 Apr 26 21:26:12.607: INFO: Scaling statefulset ss to 0 Apr 26 21:26:12.614: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 21:26:12.616: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:26:12.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3419" for this suite. 
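The burst-scaling StatefulSet exercised above creates and deletes pods in parallel rather than one at a time. As a sketch (names and image chosen for illustration, not the manifest the e2e framework actually generates), a StatefulSet opts into burst semantics with `podManagementPolicy: Parallel`:

```yaml
# Hypothetical minimal StatefulSet illustrating burst scaling.
# podManagementPolicy: Parallel lets the controller create and delete
# pods without waiting for each predecessor to be Running and Ready.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: ss-svc            # assumed headless service name
  replicas: 3
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine  # image seen in this log
```

With the default `OrderedReady` policy, ss-2 would not be created until ss-0 and ss-1 were Ready; `Parallel` removes that ordering, which is what lets the test scale through unhealthy pods.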
• [SLOW TEST:368.474 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":52,"skipped":968,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:26:12.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 26 21:26:12.697: INFO: Waiting up to 5m0s for pod "pod-fd758b78-fa6e-49ba-9097-b71db1290464" in namespace "emptydir-6228" to be "success or failure" Apr 26 21:26:12.710: INFO: Pod "pod-fd758b78-fa6e-49ba-9097-b71db1290464": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.174132ms Apr 26 21:26:14.714: INFO: Pod "pod-fd758b78-fa6e-49ba-9097-b71db1290464": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017341731s Apr 26 21:26:16.719: INFO: Pod "pod-fd758b78-fa6e-49ba-9097-b71db1290464": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022047282s STEP: Saw pod success Apr 26 21:26:16.719: INFO: Pod "pod-fd758b78-fa6e-49ba-9097-b71db1290464" satisfied condition "success or failure" Apr 26 21:26:16.723: INFO: Trying to get logs from node jerma-worker pod pod-fd758b78-fa6e-49ba-9097-b71db1290464 container test-container: STEP: delete the pod Apr 26 21:26:16.766: INFO: Waiting for pod pod-fd758b78-fa6e-49ba-9097-b71db1290464 to disappear Apr 26 21:26:16.785: INFO: Pod pod-fd758b78-fa6e-49ba-9097-b71db1290464 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:26:16.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6228" for this suite. 
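The emptydir test above runs a short-lived pod that mounts an emptyDir on the node's default medium as a non-root user and checks the 0777 mode. A minimal hand-written equivalent (UID, image, and command are assumptions, not the framework's generated spec) might look like:

```yaml
# Hypothetical pod mirroring the (non-root,0777,default) emptyDir case.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001              # non-root UID (assumed value)
  containers:
  - name: test-container
    image: busybox               # assumed image
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium (node filesystem)
  restartPolicy: Never
```

`restartPolicy: Never` matches the "success or failure" wait pattern in the log: the pod runs once and the test then reads its terminal phase.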
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":996,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:26:16.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 21:26:17.723: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 21:26:19.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533177, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533177, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723533177, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533177, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 21:26:22.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 26 21:26:26.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4916 to-be-attached-pod -i -c=container1' Apr 26 21:26:26.952: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:26:26.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4916" for this suite. STEP: Destroying namespace "webhook-4916-markers" for this suite. 
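The attach denial above works because a validating admission webhook can intercept the CONNECT operation on the `pods/attach` subresource. A sketch of such a registration (webhook name, path, and caBundle are placeholders; the service namespace and name are the ones seen in this log):

```yaml
# Hypothetical ValidatingWebhookConfiguration that denies 'kubectl attach'.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attach
webhooks:
- name: deny-attach.example.com        # placeholder name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]            # attach/exec arrive as CONNECT
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-4916          # namespace seen in this log
      name: e2e-test-webhook           # service name seen in this log
      path: /always-deny               # placeholder path
    caBundle: <base64-ca>              # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

When the webhook responds with `allowed: false`, the apiserver rejects the attach, which is why `kubectl attach` exits with rc: 1 in the log above.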
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.265 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":54,"skipped":1011,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:26:27.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-lvpc STEP: Creating a pod to test atomic-volume-subpath Apr 26 21:26:27.156: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lvpc" in namespace "subpath-1310" to be "success or failure" Apr 26 21:26:27.159: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.340215ms Apr 26 21:26:29.164: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007550249s Apr 26 21:26:31.174: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 4.017989611s Apr 26 21:26:33.178: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 6.022363538s Apr 26 21:26:35.183: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 8.026562869s Apr 26 21:26:37.187: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 10.030957627s Apr 26 21:26:39.191: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 12.034850193s Apr 26 21:26:41.194: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 14.038336349s Apr 26 21:26:43.198: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 16.042360077s Apr 26 21:26:45.202: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 18.046365809s Apr 26 21:26:47.207: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 20.051006772s Apr 26 21:26:49.211: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Running", Reason="", readiness=true. Elapsed: 22.055063682s Apr 26 21:26:51.215: INFO: Pod "pod-subpath-test-secret-lvpc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058690833s STEP: Saw pod success Apr 26 21:26:51.215: INFO: Pod "pod-subpath-test-secret-lvpc" satisfied condition "success or failure" Apr 26 21:26:51.217: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-lvpc container test-container-subpath-secret-lvpc: STEP: delete the pod Apr 26 21:26:51.249: INFO: Waiting for pod pod-subpath-test-secret-lvpc to disappear Apr 26 21:26:51.252: INFO: Pod pod-subpath-test-secret-lvpc no longer exists STEP: Deleting pod pod-subpath-test-secret-lvpc Apr 26 21:26:51.252: INFO: Deleting pod "pod-subpath-test-secret-lvpc" in namespace "subpath-1310" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:26:51.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1310" for this suite. • [SLOW TEST:24.202 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":55,"skipped":1012,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:26:51.262: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 26 21:26:51.351: INFO: Waiting up to 5m0s for pod "downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6" in namespace "downward-api-6411" to be "success or failure" Apr 26 21:26:51.370: INFO: Pod "downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.391499ms Apr 26 21:26:53.374: INFO: Pod "downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022613211s Apr 26 21:26:55.378: INFO: Pod "downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026454385s STEP: Saw pod success Apr 26 21:26:55.378: INFO: Pod "downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6" satisfied condition "success or failure" Apr 26 21:26:55.380: INFO: Trying to get logs from node jerma-worker2 pod downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6 container dapi-container: STEP: delete the pod Apr 26 21:26:55.401: INFO: Waiting for pod downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6 to disappear Apr 26 21:26:55.406: INFO: Pod downward-api-693e9f7a-ec43-4a48-83be-5cb2ab23b1c6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:26:55.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6411" for this suite. 
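The Downward API test above injects the node's IP into the container environment via an env var with a `fieldRef` to `status.hostIP`. A minimal sketch (pod name, image, and command assumed):

```yaml
# Hypothetical pod exposing the host IP via the Downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  containers:
  - name: dapi-container
    image: busybox               # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet at pod start
  restartPolicy: Never
```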
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1016,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:26:55.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-5cfb94df-323a-4e17-81e0-fe26c8ad140a STEP: Creating a pod to test consume configMaps Apr 26 21:26:55.506: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f" in namespace "configmap-7415" to be "success or failure" Apr 26 21:26:55.514: INFO: Pod "pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119945ms Apr 26 21:26:57.518: INFO: Pod "pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012513876s Apr 26 21:26:59.522: INFO: Pod "pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016533548s STEP: Saw pod success Apr 26 21:26:59.522: INFO: Pod "pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f" satisfied condition "success or failure" Apr 26 21:26:59.525: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f container configmap-volume-test: STEP: delete the pod Apr 26 21:26:59.581: INFO: Waiting for pod pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f to disappear Apr 26 21:26:59.659: INFO: Pod pod-configmaps-cc03f7bf-5156-4642-b02d-6e29a057532f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:26:59.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7415" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1028,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:26:59.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 21:26:59.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5" in namespace "projected-7592" to be "success or failure" Apr 26 21:26:59.785: INFO: Pod "downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.49339ms Apr 26 21:27:01.789: INFO: Pod "downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039366138s Apr 26 21:27:03.793: INFO: Pod "downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043501983s STEP: Saw pod success Apr 26 21:27:03.793: INFO: Pod "downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5" satisfied condition "success or failure" Apr 26 21:27:03.797: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5 container client-container: STEP: delete the pod Apr 26 21:27:03.815: INFO: Waiting for pod downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5 to disappear Apr 26 21:27:03.832: INFO: Pod downwardapi-volume-d9c70193-0ee2-4dbd-bcc4-bdec9a732fc5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:03.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7592" for this suite. 
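In the projected downwardAPI test above, the container sets no CPU limit, so the value projected into the volume falls back to the node's allocatable CPU. A sketch of the volume wiring (file path and divisor are assumptions):

```yaml
# Hypothetical pod projecting the effective CPU limit into a file.
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  containers:
  - name: client-container
    image: busybox               # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # no limit set -> node allocatable CPU
              divisor: 1m            # report in millicores (assumed)
  restartPolicy: Never
```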
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1036,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:03.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:27:03.931: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 26 21:27:03.943: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:03.947: INFO: Number of nodes with available pods: 0 Apr 26 21:27:03.947: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:27:04.951: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:04.955: INFO: Number of nodes with available pods: 0 Apr 26 21:27:04.955: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:27:05.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:05.956: INFO: Number of nodes with available pods: 0 Apr 26 21:27:05.956: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:27:06.967: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:06.971: INFO: Number of nodes with available pods: 0 Apr 26 21:27:06.971: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:27:07.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:07.956: INFO: Number of nodes with available pods: 2 Apr 26 21:27:07.956: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 26 21:27:08.003: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 26 21:27:08.003: INFO: Wrong image for pod: daemon-set-tz56r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:08.021: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:09.024: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:09.024: INFO: Wrong image for pod: daemon-set-tz56r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:09.028: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:10.026: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:10.026: INFO: Wrong image for pod: daemon-set-tz56r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:10.030: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:11.025: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:11.025: INFO: Wrong image for pod: daemon-set-tz56r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 26 21:27:11.025: INFO: Pod daemon-set-tz56r is not available Apr 26 21:27:11.030: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:12.026: INFO: Pod daemon-set-f66fb is not available Apr 26 21:27:12.026: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:12.030: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:13.030: INFO: Pod daemon-set-f66fb is not available Apr 26 21:27:13.030: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:13.078: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:14.026: INFO: Pod daemon-set-f66fb is not available Apr 26 21:27:14.026: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:14.031: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:15.030: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:15.033: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:16.026: INFO: Wrong image for pod: daemon-set-g44qn. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:16.026: INFO: Pod daemon-set-g44qn is not available Apr 26 21:27:16.031: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:17.026: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:17.026: INFO: Pod daemon-set-g44qn is not available Apr 26 21:27:17.030: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:18.026: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:18.026: INFO: Pod daemon-set-g44qn is not available Apr 26 21:27:18.029: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:19.027: INFO: Wrong image for pod: daemon-set-g44qn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 26 21:27:19.027: INFO: Pod daemon-set-g44qn is not available Apr 26 21:27:19.031: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:20.025: INFO: Pod daemon-set-9bngg is not available Apr 26 21:27:20.028: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 26 21:27:20.031: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:20.034: INFO: Number of nodes with available pods: 1 Apr 26 21:27:20.034: INFO: Node jerma-worker2 is running more than one daemon pod Apr 26 21:27:21.038: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:21.041: INFO: Number of nodes with available pods: 1 Apr 26 21:27:21.041: INFO: Node jerma-worker2 is running more than one daemon pod Apr 26 21:27:22.038: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:22.047: INFO: Number of nodes with available pods: 1 Apr 26 21:27:22.047: INFO: Node jerma-worker2 is running more than one daemon pod Apr 26 21:27:23.039: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:27:23.042: INFO: Number of nodes with available pods: 2 Apr 26 21:27:23.042: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3496, will wait for the garbage collector to delete the pods Apr 26 21:27:23.112: INFO: Deleting DaemonSet.extensions daemon-set took: 6.148924ms Apr 26 21:27:23.412: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265839ms Apr 26 21:27:29.517: INFO: Number of nodes with available pods: 0 Apr 26 21:27:29.517: INFO: Number of running nodes: 0, number of available 
pods: 0 Apr 26 21:27:29.521: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3496/daemonsets","resourceVersion":"11281845"},"items":null} Apr 26 21:27:29.523: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3496/pods","resourceVersion":"11281845"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:29.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3496" for this suite. • [SLOW TEST:25.696 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":59,"skipped":1049,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:29.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4301.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4301.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4301.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4301.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4301.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4301.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 21:27:35.679: INFO: DNS probes using dns-4301/dns-test-bcae94fb-0052-461d-9b08-1c5260d34173 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:35.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4301" for this suite. • [SLOW TEST:6.295 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":60,"skipped":1059,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:35.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ca4067ef-b233-4f92-ad8d-df863f20f178 STEP: Creating a pod to test consume secrets Apr 26 21:27:36.220: INFO: Waiting up to 5m0s for pod "pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9" in namespace "secrets-1071" to be "success or failure" Apr 26 21:27:36.227: INFO: Pod "pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.797904ms Apr 26 21:27:38.231: INFO: Pod "pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011513223s Apr 26 21:27:40.235: INFO: Pod "pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014883806s STEP: Saw pod success Apr 26 21:27:40.235: INFO: Pod "pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9" satisfied condition "success or failure" Apr 26 21:27:40.237: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9 container secret-volume-test: STEP: delete the pod Apr 26 21:27:40.253: INFO: Waiting for pod pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9 to disappear Apr 26 21:27:40.271: INFO: Pod pod-secrets-e8bfd837-5e2a-4419-8603-671f3afefbe9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:40.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1071" for this suite. 
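A note on reading the mode values in this run: the secret-volume tests set file modes that the Kubernetes API stores as *decimal* integers, which is easy to misread in the struct dumps later in the log (e.g. `DefaultMode:*420` is octal 0644, not "mode 420"). A quick self-contained conversion, for reference:

```shell
# Kubernetes serializes volume file modes as decimal; convert both ways.
printf '%d\n' 0400   # octal 0400 (r--------) -> decimal 256
printf '%o\n' 420    # decimal 420 -> octal 644 (rw-r--r--)
```

So a pod spec requesting `defaultMode: 0400` round-trips through the API as `DefaultMode:*256`.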
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1072,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:40.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:27:40.348: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 26 21:27:40.383: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 26 21:27:45.387: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 26 21:27:45.387: INFO: Creating deployment "test-rolling-update-deployment" Apr 26 21:27:45.391: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 26 21:27:45.401: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 26 21:27:47.455: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 26 21:27:47.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533265, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533265, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 21:27:49.462: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 26 21:27:49.471: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7771 /apis/apps/v1/namespaces/deployment-7771/deployments/test-rolling-update-deployment 497ddcd4-ad46-4c53-9d9d-14155df1ab7a 11282057 1 2020-04-26 21:27:45 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034bbb58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-26 21:27:45 +0000 UTC,LastTransitionTime:2020-04-26 21:27:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-26 21:27:48 +0000 UTC,LastTransitionTime:2020-04-26 21:27:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 26 21:27:49.477: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7771 /apis/apps/v1/namespaces/deployment-7771/replicasets/test-rolling-update-deployment-67cf4f6444 e96e031f-e6d6-493b-b96c-fe4b1a7efdaf 11282046 1 2020-04-26 21:27:45 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 
Deployment test-rolling-update-deployment 497ddcd4-ad46-4c53-9d9d-14155df1ab7a 0xc00210f2a7 0xc00210f2a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00210f448 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 26 21:27:49.477: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 26 21:27:49.477: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7771 /apis/apps/v1/namespaces/deployment-7771/replicasets/test-rolling-update-controller 50ea9538-ef02-4db3-a7a5-501ec05f7124 11282055 2 2020-04-26 21:27:40 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 497ddcd4-ad46-4c53-9d9d-14155df1ab7a 0xc00210efa7 0xc00210efa8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00210f0b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 21:27:49.480: INFO: Pod "test-rolling-update-deployment-67cf4f6444-fdgbg" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-fdgbg test-rolling-update-deployment-67cf4f6444- deployment-7771 /api/v1/namespaces/deployment-7771/pods/test-rolling-update-deployment-67cf4f6444-fdgbg bb63320e-35a2-42c0-8491-d2e3620ebfc7 11282045 0 2020-04-26 21:27:45 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 e96e031f-e6d6-493b-b96c-fe4b1a7efdaf 0xc00074d097 0xc00074d098}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xkmxt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xkmxt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xkmxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:27:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:27:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:27:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.177,StartTime:2020-04-26 21:27:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 21:27:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0432d12ced3d7ff974617a4f2b8ab470b61fa2a08167bedc293055ebee4dcc06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:49.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7771" for this suite. • [SLOW TEST:9.206 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":62,"skipped":1101,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:49.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:49.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-879" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1106,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:49.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready 
Apr 26 21:27:50.248: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 21:27:52.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533270, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533270, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533270, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533270, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 21:27:55.290: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:55.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4948" for this suite. 
STEP: Destroying namespace "webhook-4948-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.052 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":64,"skipped":1117,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:55.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-c0f8559a-c15e-4b6b-b257-e04232f309c0 STEP: Creating a pod to test consume secrets Apr 26 21:27:55.766: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c" in namespace "projected-2805" to be 
"success or failure" Apr 26 21:27:55.770: INFO: Pod "pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83343ms Apr 26 21:27:57.774: INFO: Pod "pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007516465s Apr 26 21:27:59.778: INFO: Pod "pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011889038s STEP: Saw pod success Apr 26 21:27:59.778: INFO: Pod "pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c" satisfied condition "success or failure" Apr 26 21:27:59.782: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c container projected-secret-volume-test: STEP: delete the pod Apr 26 21:27:59.799: INFO: Waiting for pod pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c to disappear Apr 26 21:27:59.803: INFO: Pod pod-projected-secrets-be2c82f4-e81b-472c-b6b1-69839f399d6c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:27:59.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2805" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1117,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:27:59.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 26 21:27:59.860: INFO: >>> kubeConfig: /root/.kube/config Apr 26 21:28:02.900: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:28:13.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7981" for this suite. 
• [SLOW TEST:13.844 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":66,"skipped":1122,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:28:13.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:29:13.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5785" for this suite. 
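The 60-second readiness-probe test above verifies a key asymmetry in kubelet probe handling: a failing *readiness* probe keeps the pod out of Ready (and out of Service endpoints) but never restarts the container, unlike a failing liveness probe. A sketch of a container spec that would behave this way (probe values are illustrative):

```yaml
# A readiness check that always fails: the pod stays NotReady for its whole
# lifetime, and restartCount remains 0 because readiness failures do not
# trigger container restarts.
readinessProbe:
  exec:
    command: ["/bin/false"]
  initialDelaySeconds: 1
  periodSeconds: 5
```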
• [SLOW TEST:60.083 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1129,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:29:13.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 26 21:29:13.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5217' Apr 26 21:29:14.083: INFO: stderr: "" Apr 26 21:29:14.083: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 26 21:29:14.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5217' Apr 26 21:29:14.229: INFO: stderr: "" Apr 26 21:29:14.229: INFO: stdout: "update-demo-nautilus-lgs7r update-demo-nautilus-sqbr6 " Apr 26 21:29:14.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgs7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5217' Apr 26 21:29:14.378: INFO: stderr: "" Apr 26 21:29:14.378: INFO: stdout: "" Apr 26 21:29:14.378: INFO: update-demo-nautilus-lgs7r is created but not running Apr 26 21:29:19.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5217' Apr 26 21:29:19.487: INFO: stderr: "" Apr 26 21:29:19.487: INFO: stdout: "update-demo-nautilus-lgs7r update-demo-nautilus-sqbr6 " Apr 26 21:29:19.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgs7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5217' Apr 26 21:29:19.587: INFO: stderr: "" Apr 26 21:29:19.587: INFO: stdout: "true" Apr 26 21:29:19.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgs7r -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5217' Apr 26 21:29:19.741: INFO: stderr: "" Apr 26 21:29:19.741: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 21:29:19.741: INFO: validating pod update-demo-nautilus-lgs7r Apr 26 21:29:19.746: INFO: got data: { "image": "nautilus.jpg" } Apr 26 21:29:19.746: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 21:29:19.746: INFO: update-demo-nautilus-lgs7r is verified up and running Apr 26 21:29:19.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqbr6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5217' Apr 26 21:29:19.832: INFO: stderr: "" Apr 26 21:29:19.832: INFO: stdout: "true" Apr 26 21:29:19.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqbr6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5217' Apr 26 21:29:19.940: INFO: stderr: "" Apr 26 21:29:19.940: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 21:29:19.940: INFO: validating pod update-demo-nautilus-sqbr6 Apr 26 21:29:19.944: INFO: got data: { "image": "nautilus.jpg" } Apr 26 21:29:19.944: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 26 21:29:19.944: INFO: update-demo-nautilus-sqbr6 is verified up and running STEP: using delete to clean up resources Apr 26 21:29:19.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5217' Apr 26 21:29:20.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 21:29:20.059: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 26 21:29:20.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5217' Apr 26 21:29:20.184: INFO: stderr: "No resources found in kubectl-5217 namespace.\n" Apr 26 21:29:20.184: INFO: stdout: "" Apr 26 21:29:20.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5217 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 21:29:20.287: INFO: stderr: "" Apr 26 21:29:20.287: INFO: stdout: "update-demo-nautilus-lgs7r\nupdate-demo-nautilus-sqbr6\n" Apr 26 21:29:20.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5217' Apr 26 21:29:20.886: INFO: stderr: "No resources found in kubectl-5217 namespace.\n" Apr 26 21:29:20.886: INFO: stdout: "" Apr 26 21:29:20.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5217 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 21:29:20.986: INFO: stderr: "" Apr 26 21:29:20.986: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:29:20.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5217" for this suite. • [SLOW TEST:7.255 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":68,"skipped":1130,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:29:20.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 26 21:29:26.656: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released 
[AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:29:27.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8502" for this suite. • [SLOW TEST:6.709 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":69,"skipped":1150,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:29:27.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 26 21:29:27.757: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 21:29:27.807: INFO: Waiting for terminating namespaces to be deleted... 
Apr 26 21:29:27.810: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 26 21:29:27.820: INFO: update-demo-nautilus-lgs7r from kubectl-5217 started at 2020-04-26 21:29:14 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.820: INFO: Container update-demo ready: false, restart count 0 Apr 26 21:29:27.820: INFO: pod-adoption-release-vjnbb from replicaset-8502 started at 2020-04-26 21:29:26 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.820: INFO: Container pod-adoption-release ready: false, restart count 0 Apr 26 21:29:27.820: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.820: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 21:29:27.820: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.820: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 21:29:27.820: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 26 21:29:27.839: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.839: INFO: Container kube-hunter ready: false, restart count 0 Apr 26 21:29:27.839: INFO: update-demo-nautilus-sqbr6 from kubectl-5217 started at 2020-04-26 21:29:14 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.839: INFO: Container update-demo ready: false, restart count 0 Apr 26 21:29:27.839: INFO: pod-adoption-release from replicaset-8502 started at 2020-04-26 21:29:21 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.839: INFO: Container pod-adoption-release ready: true, restart count 0 Apr 26 21:29:27.839: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.839: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 21:29:27.839: INFO: kube-bench-hk6h6 from default 
started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.839: INFO: Container kube-bench ready: false, restart count 0 Apr 26 21:29:27.839: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:29:27.839: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16097be52cd4124a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:29:28.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-990" for this suite. 
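The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector.") is what the scheduler emits for a pod like the following sketch (names, label key, and image are assumed, not taken from the test): a nodeSelector that matches no node label keeps the pod Pending forever.

```yaml
# Hypothetical unschedulable pod of the kind this test submits.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e.example/nonexistent: "42"  # assumed label; no node carries it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # assumed image
```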
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":70,"skipped":1160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:29:28.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod Apr 26 21:29:29.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8275 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 26 21:29:29.143: INFO: stderr: "" Apr 26 21:29:29.143: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. 
Apr 26 21:29:29.143: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 26 21:29:29.143: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8275" to be "running and ready, or succeeded" Apr 26 21:29:29.164: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.672375ms Apr 26 21:29:31.172: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029159808s Apr 26 21:29:33.224: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.080955003s Apr 26 21:29:33.224: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 26 21:29:33.224: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 26 21:29:33.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8275' Apr 26 21:29:33.682: INFO: stderr: "" Apr 26 21:29:33.682: INFO: stdout: "I0426 21:29:31.353889 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/bdvh 317\nI0426 21:29:31.554224 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/fsl 289\nI0426 21:29:31.754252 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/h9s 421\nI0426 21:29:31.954085 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/99hw 472\nI0426 21:29:32.154066 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/4r74 583\nI0426 21:29:32.354252 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/76h 444\nI0426 21:29:32.554091 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/268z 547\nI0426 21:29:32.754136 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/5qps 412\nI0426 21:29:32.954042 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/ptb8 462\nI0426 21:29:33.154066 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/njl 277\nI0426 
21:29:33.354118 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/jpd 316\nI0426 21:29:33.554031 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/gs4 556\n" STEP: limiting log lines Apr 26 21:29:33.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8275 --tail=1' Apr 26 21:29:33.826: INFO: stderr: "" Apr 26 21:29:33.826: INFO: stdout: "I0426 21:29:33.754075 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/xr77 426\n" Apr 26 21:29:33.826: INFO: got output "I0426 21:29:33.754075 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/xr77 426\n" STEP: limiting log bytes Apr 26 21:29:33.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8275 --limit-bytes=1' Apr 26 21:29:33.965: INFO: stderr: "" Apr 26 21:29:33.965: INFO: stdout: "I" Apr 26 21:29:33.965: INFO: got output "I" STEP: exposing timestamps Apr 26 21:29:33.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8275 --tail=1 --timestamps' Apr 26 21:29:34.138: INFO: stderr: "" Apr 26 21:29:34.138: INFO: stdout: "2020-04-26T21:29:33.954209249Z I0426 21:29:33.954069 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/kt29 558\n" Apr 26 21:29:34.138: INFO: got output "2020-04-26T21:29:33.954209249Z I0426 21:29:33.954069 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/kt29 558\n" STEP: restricting to a time range Apr 26 21:29:36.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8275 --since=1s' Apr 26 21:29:36.750: INFO: stderr: "" Apr 26 21:29:36.750: INFO: stdout: "I0426 21:29:35.754170 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/rz2 213\nI0426 21:29:35.954053 1 logs_generator.go:76] 23 POST 
/api/v1/namespaces/default/pods/sh6b 321\nI0426 21:29:36.154127 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/cbs 418\nI0426 21:29:36.354040 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/4b9 400\nI0426 21:29:36.554119 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/4vlf 344\n" Apr 26 21:29:36.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8275 --since=24h' Apr 26 21:29:36.880: INFO: stderr: "" Apr 26 21:29:36.880: INFO: stdout: "I0426 21:29:31.353889 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/bdvh 317\nI0426 21:29:31.554224 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/fsl 289\nI0426 21:29:31.754252 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/h9s 421\nI0426 21:29:31.954085 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/99hw 472\nI0426 21:29:32.154066 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/4r74 583\nI0426 21:29:32.354252 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/76h 444\nI0426 21:29:32.554091 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/268z 547\nI0426 21:29:32.754136 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/5qps 412\nI0426 21:29:32.954042 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/ptb8 462\nI0426 21:29:33.154066 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/njl 277\nI0426 21:29:33.354118 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/jpd 316\nI0426 21:29:33.554031 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/gs4 556\nI0426 21:29:33.754075 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/xr77 426\nI0426 21:29:33.954069 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/kt29 558\nI0426 21:29:34.154057 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/z8hz 504\nI0426 21:29:34.354187 1 logs_generator.go:76] 15 
PUT /api/v1/namespaces/ns/pods/h6m7 568\nI0426 21:29:34.554100 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/c9tk 270\nI0426 21:29:34.754123 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/9qdn 251\nI0426 21:29:34.954050 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/vmkp 439\nI0426 21:29:35.154118 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/qcb4 247\nI0426 21:29:35.354126 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/rtmr 464\nI0426 21:29:35.554102 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/4c2 437\nI0426 21:29:35.754170 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/rz2 213\nI0426 21:29:35.954053 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/sh6b 321\nI0426 21:29:36.154127 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/cbs 418\nI0426 21:29:36.354040 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/4b9 400\nI0426 21:29:36.554119 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/4vlf 344\nI0426 21:29:36.754087 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/w78b 297\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Apr 26 21:29:36.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8275' Apr 26 21:29:39.682: INFO: stderr: "" Apr 26 21:29:39.682: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:29:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8275" for this suite. 
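The filtering steps above (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) all operate on lines in the logs-generator format visible in the stdout dumps: a glog header, then `<seq> <verb> /api/v1/namespaces/<ns>/pods/<name> <bytes>`. A small standalone sketch (not part of the e2e suite; the regex and function name are my own) of parsing that format, which can be handy when post-processing such logs offline:

```python
import re

# logs-generator emits lines like:
#   I0426 21:29:31.353889   1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/bdvh 317
# i.e. a glog header, then: <seq> <verb> /api/v1/namespaces/<ns>/pods/<pod> <bytes>
LINE = re.compile(
    r"\] (?P<seq>\d+) (?P<verb>[A-Z]+) "
    r"/api/v1/namespaces/(?P<ns>[^/]+)/pods/(?P<pod>\S+) (?P<size>\d+)$"
)

def parse(line):
    """Return (seq, verb, namespace, pod, size) or None for non-matching lines."""
    m = LINE.search(line)
    if not m:
        return None
    return (int(m["seq"]), m["verb"], m["ns"], m["pod"], int(m["size"]))

sample = "I0426 21:29:31.353889 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/bdvh 317"
print(parse(sample))  # -> (0, 'PUT', 'ns', 'bdvh', 317)
```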
• [SLOW TEST:10.836 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":71,"skipped":1202,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:29:39.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 26 21:29:39.825: INFO: Waiting up to 5m0s for pod "pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad" in namespace "emptydir-9276" to be "success or failure" Apr 26 21:29:39.838: INFO: Pod "pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad": Phase="Pending", Reason="", readiness=false. Elapsed: 12.751223ms Apr 26 21:29:41.843: INFO: Pod "pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017151466s Apr 26 21:29:43.847: INFO: Pod "pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021391158s STEP: Saw pod success Apr 26 21:29:43.847: INFO: Pod "pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad" satisfied condition "success or failure" Apr 26 21:29:43.850: INFO: Trying to get logs from node jerma-worker2 pod pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad container test-container: STEP: delete the pod Apr 26 21:29:43.891: INFO: Waiting for pod pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad to disappear Apr 26 21:29:43.914: INFO: Pod pod-e5a3e7c7-f6ad-40d0-9157-30edb31b0dad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:29:43.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9276" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1209,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:29:43.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-sskj STEP: Creating a pod to test atomic-volume-subpath Apr 26 21:29:44.024: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-sskj" in namespace "subpath-1246" to be "success or failure" Apr 26 21:29:44.027: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.650113ms Apr 26 21:29:46.031: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00752456s Apr 26 21:29:48.036: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 4.011923162s Apr 26 21:29:50.051: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 6.026938146s Apr 26 21:29:52.055: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 8.031240058s Apr 26 21:29:54.059: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 10.035601578s Apr 26 21:29:56.064: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 12.039993052s Apr 26 21:29:58.068: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 14.044328172s Apr 26 21:30:00.072: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 16.04810262s Apr 26 21:30:02.076: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 18.052244262s Apr 26 21:30:04.080: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. Elapsed: 20.056265236s Apr 26 21:30:06.084: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.060312973s Apr 26 21:30:08.088: INFO: Pod "pod-subpath-test-projected-sskj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.06435232s STEP: Saw pod success Apr 26 21:30:08.088: INFO: Pod "pod-subpath-test-projected-sskj" satisfied condition "success or failure" Apr 26 21:30:08.091: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-sskj container test-container-subpath-projected-sskj: STEP: delete the pod Apr 26 21:30:08.109: INFO: Waiting for pod pod-subpath-test-projected-sskj to disappear Apr 26 21:30:08.113: INFO: Pod pod-subpath-test-projected-sskj no longer exists STEP: Deleting pod pod-subpath-test-projected-sskj Apr 26 21:30:08.113: INFO: Deleting pod "pod-subpath-test-projected-sskj" in namespace "subpath-1246" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:30:08.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1246" for this suite. 
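For context on the subpath test above: a rough sketch of the shape of pod it builds (image, names, key, and path are assumed, not taken from the suite). A projected volume is mounted via `subPath`, and the test container reads through that subpath to verify the atomic-writer update semantics of projected volumes.

```yaml
# Hypothetical pod illustrating a projected volume mounted via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test       # illustrative name
spec:
  containers:
  - name: test-container-subpath
    image: busybox:1.29        # assumed image
    command: ["/bin/sh", "-c", "cat /mnt/sub; sleep 30"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/sub
      subPath: data            # illustrative key within the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-configmap  # assumed ConfigMap name
```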
• [SLOW TEST:24.199 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":73,"skipped":1229,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:30:08.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:30:08.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-948" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":74,"skipped":1242,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:30:08.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:30:08.263: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1226" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":75,"skipped":1255,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:30:08.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 21:30:08.823: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 21:30:10.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533408, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533408, 
loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533408, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533408, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 21:30:13.870: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:30:14.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5855" for this suite. STEP: Destroying namespace "webhook-5855-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.893 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":76,"skipped":1288,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:30:14.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:30:25.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-436" for this suite. • [SLOW TEST:11.146 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":77,"skipped":1288,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:30:25.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 26 21:30:25.394: INFO: Waiting up to 5m0s for pod "var-expansion-f1c458cd-bb4e-4541-adce-84674558050e" in namespace "var-expansion-7424" to be "success or failure" Apr 26 21:30:25.402: INFO: Pod "var-expansion-f1c458cd-bb4e-4541-adce-84674558050e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.274817ms Apr 26 21:30:27.406: INFO: Pod "var-expansion-f1c458cd-bb4e-4541-adce-84674558050e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011750289s Apr 26 21:30:29.411: INFO: Pod "var-expansion-f1c458cd-bb4e-4541-adce-84674558050e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016591773s STEP: Saw pod success Apr 26 21:30:29.411: INFO: Pod "var-expansion-f1c458cd-bb4e-4541-adce-84674558050e" satisfied condition "success or failure" Apr 26 21:30:29.414: INFO: Trying to get logs from node jerma-worker pod var-expansion-f1c458cd-bb4e-4541-adce-84674558050e container dapi-container: STEP: delete the pod Apr 26 21:30:29.446: INFO: Waiting for pod var-expansion-f1c458cd-bb4e-4541-adce-84674558050e to disappear Apr 26 21:30:29.478: INFO: Pod var-expansion-f1c458cd-bb4e-4541-adce-84674558050e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:30:29.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7424" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1298,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:30:29.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5932.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local; 
sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5932.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5932.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 21:30:35.686: INFO: DNS probes using dns-test-4d7dac5e-0e59-48bd-9767-8bcaec12974e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5932.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5932.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5932.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 21:30:41.855: INFO: File jessie_udp@dns-test-service-3.dns-5932.svc.cluster.local from pod dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 26 21:30:41.855: INFO: Lookups using dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 failed for: [jessie_udp@dns-test-service-3.dns-5932.svc.cluster.local] Apr 26 21:30:46.860: INFO: File wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local from pod dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 26 21:30:46.864: INFO: Lookups using dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 failed for: [wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local] Apr 26 21:30:51.859: INFO: File wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local from pod dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 26 21:30:51.862: INFO: Lookups using dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 failed for: [wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local] Apr 26 21:30:56.859: INFO: File wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local from pod dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 26 21:30:56.863: INFO: Lookups using dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 failed for: [wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local] Apr 26 21:31:01.860: INFO: File wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local from pod dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 26 21:31:01.863: INFO: Lookups using dns-5932/dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 failed for: [wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local] Apr 26 21:31:06.863: INFO: DNS probes using dns-test-19af8d6b-5d34-4064-9e96-56b999f43a53 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5932.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5932.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5932.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5932.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 21:31:13.319: INFO: File jessie_udp@dns-test-service-3.dns-5932.svc.cluster.local from pod dns-5932/dns-test-70c8adc3-3ce8-43c1-a4eb-2ad753e648be contains '' instead of '10.97.92.228' Apr 26 21:31:13.319: INFO: Lookups using dns-5932/dns-test-70c8adc3-3ce8-43c1-a4eb-2ad753e648be failed for: [jessie_udp@dns-test-service-3.dns-5932.svc.cluster.local] Apr 26 21:31:18.380: INFO: DNS probes using dns-test-70c8adc3-3ce8-43c1-a4eb-2ad753e648be succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:31:18.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5932" for this suite. 
• [SLOW TEST:49.559 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":79,"skipped":1318,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:31:19.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-667 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-667 to expose endpoints map[] Apr 26 21:31:19.212: INFO: Get endpoints failed (10.507654ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 26 21:31:20.215: INFO: successfully validated that service multi-endpoint-test in namespace services-667 exposes endpoints map[] (1.014097629s elapsed) STEP: Creating pod pod1 in namespace services-667 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-667 to expose endpoints map[pod1:[100]] Apr 26 
21:31:23.318: INFO: successfully validated that service multi-endpoint-test in namespace services-667 exposes endpoints map[pod1:[100]] (3.096044928s elapsed) STEP: Creating pod pod2 in namespace services-667 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-667 to expose endpoints map[pod1:[100] pod2:[101]] Apr 26 21:31:26.469: INFO: successfully validated that service multi-endpoint-test in namespace services-667 exposes endpoints map[pod1:[100] pod2:[101]] (3.144805341s elapsed) STEP: Deleting pod pod1 in namespace services-667 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-667 to expose endpoints map[pod2:[101]] Apr 26 21:31:27.509: INFO: successfully validated that service multi-endpoint-test in namespace services-667 exposes endpoints map[pod2:[101]] (1.036310433s elapsed) STEP: Deleting pod pod2 in namespace services-667 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-667 to expose endpoints map[] Apr 26 21:31:27.523: INFO: successfully validated that service multi-endpoint-test in namespace services-667 exposes endpoints map[] (9.130324ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:31:27.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-667" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.533 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":80,"skipped":1334,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:31:27.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 26 21:31:35.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:35.723: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 21:31:37.724: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:37.728: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 21:31:39.724: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:39.728: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 21:31:41.724: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:41.728: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 21:31:43.724: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:43.728: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 21:31:45.724: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:45.728: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 21:31:47.724: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:47.728: INFO: Pod pod-with-poststart-http-hook still exists Apr 26 21:31:49.724: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 26 21:31:49.727: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:31:49.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3237" for this suite. 
• [SLOW TEST:22.154 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1338,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:31:49.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:31:54.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-474" for this suite. 
• [SLOW TEST:5.155 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":82,"skipped":1345,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:31:54.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 26 21:31:55.215: INFO: >>> kubeConfig: /root/.kube/config Apr 26 21:31:58.283: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:32:08.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3971" for this suite. 
• [SLOW TEST:14.065 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":83,"skipped":1362,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:32:08.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 26 21:32:13.078: INFO: &Pod{ObjectMeta:{send-events-5138b29f-a59e-46f5-aa05-e781e68830d6 events-7201 /api/v1/namespaces/events-7201/pods/send-events-5138b29f-a59e-46f5-aa05-e781e68830d6 d2fbd1e7-a76b-44d2-9b06-d0dcfa6d6082 11283629 0 2020-04-26 21:32:09 +0000 UTC map[name:foo time:35151809] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:32:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.97,StartTime:2020-04-26 21:32:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 21:32:11 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://e669e74d410c2e501e2693c44404f07bf286648f844cb611e989212264d851db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 26 21:32:15.082: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 26 21:32:17.086: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:32:17.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7201" for this suite. 
• [SLOW TEST:8.197 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":84,"skipped":1365,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:32:17.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0426 21:32:47.735718 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 26 21:32:47.735: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:32:47.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9225" for this suite.
• [SLOW TEST:30.592 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":85,"skipped":1370,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:32:47.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d3c0f6b3-22a6-4306-9f34-20b79d34dae7 STEP: Creating a pod to test consume secrets Apr 26 21:32:48.227: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2" in namespace "projected-1398" to be "success or failure" Apr 26 21:32:48.243: INFO: Pod "pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.830914ms
Apr 26 21:32:50.247: INFO: Pod "pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020262602s
Apr 26 21:32:52.250: INFO: Pod "pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023362252s
STEP: Saw pod success
Apr 26 21:32:52.250: INFO: Pod "pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2" satisfied condition "success or failure"
Apr 26 21:32:52.252: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2 container projected-secret-volume-test:
STEP: delete the pod
Apr 26 21:32:52.380: INFO: Waiting for pod pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2 to disappear
Apr 26 21:32:52.446: INFO: Pod pod-projected-secrets-c151d695-ebba-4e62-a008-630c31ee4fc2 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:32:52.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1398" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1371,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:32:52.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:32:52.549: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 5.886023ms)
Apr 26 21:32:52.553: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.723982ms)
Apr 26 21:32:52.556: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.477954ms)
Apr 26 21:32:52.560: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.683306ms)
Apr 26 21:32:52.563: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.343851ms)
Apr 26 21:32:52.566: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.173115ms)
Apr 26 21:32:52.570: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.406325ms)
Apr 26 21:32:52.573: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.450817ms)
Apr 26 21:32:52.577: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.302768ms)
Apr 26 21:32:52.580: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.546742ms)
Apr 26 21:32:52.584: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.005976ms)
Apr 26 21:32:52.588: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.380765ms)
Apr 26 21:32:52.628: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 40.533078ms)
Apr 26 21:32:52.632: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.581954ms)
Apr 26 21:32:52.635: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.452099ms)
Apr 26 21:32:52.639: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.819548ms)
Apr 26 21:32:52.643: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.482587ms)
Apr 26 21:32:52.647: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.693677ms)
Apr 26 21:32:52.650: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.816391ms)
Apr 26 21:32:52.654: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.731196ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:32:52.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8193" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":87,"skipped":1379,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:32:52.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:05.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6451" for this suite. • [SLOW TEST:13.135 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":88,"skipped":1396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:05.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 26 21:33:05.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 26 21:33:06.060: INFO: stderr: "" Apr 26 21:33:06.060: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:06.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6984" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":89,"skipped":1432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:06.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0426 21:33:17.682526 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 26 21:33:17.682: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:33:17.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9158" for this suite.
• [SLOW TEST:11.639 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":90,"skipped":1455,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:17.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9dfb2bb4-968b-4217-8ac3-65eb43343355 STEP: Creating a pod to test consume configMaps Apr 26 21:33:17.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb" in namespace "configmap-476" to be "success or failure" Apr 26 21:33:17.813: INFO: Pod "pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.588249ms
Apr 26 21:33:19.874: INFO: Pod "pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077745063s
Apr 26 21:33:21.878: INFO: Pod "pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081554396s
STEP: Saw pod success
Apr 26 21:33:21.878: INFO: Pod "pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb" satisfied condition "success or failure"
Apr 26 21:33:21.881: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb container configmap-volume-test:
STEP: delete the pod
Apr 26 21:33:21.897: INFO: Waiting for pod pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb to disappear
Apr 26 21:33:21.902: INFO: Pod pod-configmaps-26a9c454-a45e-4922-96a4-57f1faba80eb no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:33:21.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-476" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1460,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:21.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 21:33:22.687: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 21:33:24.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 21:33:26.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533602, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 21:33:29.807: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:33:29.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the 
custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:31.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3646" for this suite. STEP: Destroying namespace "webhook-3646-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.187 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":92,"skipped":1478,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:31.096: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:33:31.144: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:31.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7269" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":93,"skipped":1498,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:31.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:36.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6146" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1506,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:36.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 26 21:33:36.135: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:36.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-887" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":95,"skipped":1519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:36.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 26 21:33:36.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7619' Apr 26 21:33:38.876: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 26 21:33:38.876: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Apr 26 21:33:42.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7619' Apr 26 21:33:43.024: INFO: stderr: "" Apr 26 21:33:43.024: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:43.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7619" for this suite. 
• [SLOW TEST:6.789 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":96,"skipped":1553,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:43.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c7c53cca-a9fd-434e-b7ea-25027eecb087 STEP: Creating a pod to test consume configMaps Apr 26 21:33:43.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906" in namespace "configmap-2551" to be "success or failure" Apr 26 21:33:43.144: INFO: Pod "pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.940752ms Apr 26 21:33:45.150: INFO: Pod "pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046541592s Apr 26 21:33:47.154: INFO: Pod "pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050367906s STEP: Saw pod success Apr 26 21:33:47.154: INFO: Pod "pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906" satisfied condition "success or failure" Apr 26 21:33:47.157: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906 container configmap-volume-test: STEP: delete the pod Apr 26 21:33:47.174: INFO: Waiting for pod pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906 to disappear Apr 26 21:33:47.179: INFO: Pod pod-configmaps-cc68344a-a754-421f-a526-7c6737db8906 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:47.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2551" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1577,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:47.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-dd07d97c-9e61-4413-b850-efd76398acb1 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:47.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-193" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":98,"skipped":1592,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:47.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 26 21:33:47.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9492' Apr 26 21:33:47.520: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 26 21:33:47.520: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Apr 26 21:33:47.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9492' Apr 26 21:33:47.648: INFO: stderr: "" Apr 26 21:33:47.648: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:47.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9492" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":99,"skipped":1594,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:47.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 26 21:33:47.716: INFO: Waiting up to 5m0s for pod 
"var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a" in namespace "var-expansion-1470" to be "success or failure" Apr 26 21:33:47.761: INFO: Pod "var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.758387ms Apr 26 21:33:49.841: INFO: Pod "var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12490577s Apr 26 21:33:51.846: INFO: Pod "var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a": Phase="Running", Reason="", readiness=true. Elapsed: 4.130104891s Apr 26 21:33:53.850: INFO: Pod "var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134251186s STEP: Saw pod success Apr 26 21:33:53.850: INFO: Pod "var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a" satisfied condition "success or failure" Apr 26 21:33:53.853: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a container dapi-container: STEP: delete the pod Apr 26 21:33:53.930: INFO: Waiting for pod var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a to disappear Apr 26 21:33:53.940: INFO: Pod var-expansion-942c4d0f-25e2-40e8-9661-1628fb132b3a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:53.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1470" for this suite. 
• [SLOW TEST:6.289 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:53.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:53.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-438" for 
this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":101,"skipped":1675,"failed":0} ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:54.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3758/configmap-test-88b58f86-729a-43f8-9319-0c98052971fc STEP: Creating a pod to test consume configMaps Apr 26 21:33:54.097: INFO: Waiting up to 5m0s for pod "pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0" in namespace "configmap-3758" to be "success or failure" Apr 26 21:33:54.102: INFO: Pod "pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225111ms Apr 26 21:33:56.120: INFO: Pod "pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022322009s Apr 26 21:33:58.124: INFO: Pod "pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026334095s STEP: Saw pod success Apr 26 21:33:58.124: INFO: Pod "pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0" satisfied condition "success or failure" Apr 26 21:33:58.127: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0 container env-test: STEP: delete the pod Apr 26 21:33:58.162: INFO: Waiting for pod pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0 to disappear Apr 26 21:33:58.185: INFO: Pod pod-configmaps-45609532-73cc-4156-bb49-ff794dc81bd0 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:33:58.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3758" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1675,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:33:58.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 26 21:34:02.815: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9923 pod-service-account-bd357f9c-12d6-41ce-8a90-9b840ae38866 -c=test -- cat 
/var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 26 21:34:03.032: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9923 pod-service-account-bd357f9c-12d6-41ce-8a90-9b840ae38866 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 26 21:34:03.251: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9923 pod-service-account-bd357f9c-12d6-41ce-8a90-9b840ae38866 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:34:03.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9923" for this suite. • [SLOW TEST:5.312 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":103,"skipped":1695,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:34:03.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:34:03.583: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:34:04.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1712" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":104,"skipped":1717,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:34:04.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod 
to test downward API volume plugin Apr 26 21:34:04.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339" in namespace "projected-8847" to be "success or failure" Apr 26 21:34:04.311: INFO: Pod "downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339": Phase="Pending", Reason="", readiness=false. Elapsed: 15.621213ms Apr 26 21:34:06.324: INFO: Pod "downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028160726s Apr 26 21:34:08.328: INFO: Pod "downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032655028s STEP: Saw pod success Apr 26 21:34:08.328: INFO: Pod "downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339" satisfied condition "success or failure" Apr 26 21:34:08.331: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339 container client-container: STEP: delete the pod Apr 26 21:34:08.362: INFO: Waiting for pod downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339 to disappear Apr 26 21:34:08.377: INFO: Pod downwardapi-volume-5d1bd485-6504-4598-9750-8a4057f68339 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:34:08.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8847" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1738,"failed":0} SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:34:08.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:34:08.457: INFO: Creating deployment "test-recreate-deployment" Apr 26 21:34:08.461: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 26 21:34:08.503: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 26 21:34:10.515: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 26 21:34:10.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533648, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533648, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533648, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533648, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 21:34:12.520: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 26 21:34:12.526: INFO: Updating deployment test-recreate-deployment Apr 26 21:34:12.526: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 26 21:34:12.861: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7159 /apis/apps/v1/namespaces/deployment-7159/deployments/test-recreate-deployment fc3da183-37c0-44a6-a442-c3b7d2aa1832 11284729 2 2020-04-26 21:34:08 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c62bc8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-26 21:34:12 +0000 UTC,LastTransitionTime:2020-04-26 21:34:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-26 21:34:12 +0000 UTC,LastTransitionTime:2020-04-26 21:34:08 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 26 21:34:12.888: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7159 /apis/apps/v1/namespaces/deployment-7159/replicasets/test-recreate-deployment-5f94c574ff e46b08d5-4c72-4e1a-894c-728ff75a0c27 11284727 1 2020-04-26 21:34:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment fc3da183-37c0-44a6-a442-c3b7d2aa1832 0xc0041ae4a7 0xc0041ae4a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041ae508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 21:34:12.888: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 26 21:34:12.888: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-7159 /apis/apps/v1/namespaces/deployment-7159/replicasets/test-recreate-deployment-799c574856 f1f07506-fc50-4f32-981b-bd1f88ce7061 11284718 2 2020-04-26 21:34:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment fc3da183-37c0-44a6-a442-c3b7d2aa1832 0xc0041ae577 0xc0041ae578}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041ae5e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 21:34:13.026: INFO: Pod "test-recreate-deployment-5f94c574ff-clgdz" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-clgdz test-recreate-deployment-5f94c574ff- deployment-7159 /api/v1/namespaces/deployment-7159/pods/test-recreate-deployment-5f94c574ff-clgdz 72ac52a5-400f-416a-8441-c16571990e6b 11284731 0 2020-04-26 21:34:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff e46b08d5-4c72-4e1a-894c-728ff75a0c27 0xc002c63077 0xc002c63078}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z5qch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z5qch,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z5qch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:34:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-26 21:34:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:34:13.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7159" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":106,"skipped":1741,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:34:13.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 26 21:34:13.111: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 21:34:13.172: INFO: Waiting for terminating namespaces to be deleted... 
Apr 26 21:34:13.175: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 26 21:34:13.179: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:34:13.179: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 21:34:13.179: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:34:13.179: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 21:34:13.179: INFO: test-recreate-deployment-5f94c574ff-clgdz from deployment-7159 started at 2020-04-26 21:34:12 +0000 UTC (1 container statuses recorded) Apr 26 21:34:13.179: INFO: Container httpd ready: false, restart count 0 Apr 26 21:34:13.179: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 26 21:34:13.183: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:34:13.183: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 21:34:13.183: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 26 21:34:13.183: INFO: Container kube-bench ready: false, restart count 0 Apr 26 21:34:13.183: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:34:13.183: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 21:34:13.183: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 26 21:34:13.183: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. 
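[Editor's note] The hostPort-conflict rule this test exercises can be sketched as follows. This is an illustrative model, not the actual kube-scheduler predicate code: two pods conflict when they request the same hostPort and protocol and their hostIPs overlap, where a hostIP of 0.0.0.0 (or the empty string, as in pod4 below) binds all interfaces and therefore overlaps with any other hostIP.

```python
def host_ports_conflict(a, b):
    """Simplified model of the scheduler's host-port check (illustrative,
    not the real kube-scheduler implementation): same hostPort + same
    protocol + overlapping hostIPs means the second pod cannot schedule."""
    if a["hostPort"] != b["hostPort"] or a["protocol"] != b["protocol"]:
        return False
    wildcard = {"", "0.0.0.0"}  # empty string is treated as 0.0.0.0
    return (a["hostIP"] in wildcard
            or b["hostIP"] in wildcard
            or a["hostIP"] == b["hostIP"])

# pod4 from the log: hostPort 54322, hostIP "" (i.e. 0.0.0.0)
pod4 = {"hostPort": 54322, "protocol": "TCP", "hostIP": ""}
# pod5: same hostPort and protocol, but hostIP 127.0.0.1 -- still conflicts
pod5 = {"hostPort": 54322, "protocol": "TCP", "hostIP": "127.0.0.1"}
```

Under this model pod5 conflicts with pod4 despite the different hostIP, which is why the test expects pod5 to remain unscheduled.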
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6b8f2904-87c9-4474-99bd-20ea572e96c4 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-6b8f2904-87c9-4474-99bd-20ea572e96c4 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-6b8f2904-87c9-4474-99bd-20ea572e96c4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:39:21.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1646" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.720 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":107,"skipped":1745,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:39:21.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Apr 26 21:39:21.832: INFO: Waiting up to 5m0s for pod "client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7" in namespace "containers-3735" to be "success or failure" Apr 26 21:39:21.836: INFO: Pod "client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.912789ms Apr 26 21:39:23.840: INFO: Pod "client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008067863s Apr 26 21:39:25.845: INFO: Pod "client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012506039s STEP: Saw pod success Apr 26 21:39:25.845: INFO: Pod "client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7" satisfied condition "success or failure" Apr 26 21:39:25.848: INFO: Trying to get logs from node jerma-worker pod client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7 container test-container: STEP: delete the pod Apr 26 21:39:25.880: INFO: Waiting for pod client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7 to disappear Apr 26 21:39:25.884: INFO: Pod client-containers-991baa12-bf41-408b-b590-1a7134c5b7f7 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:39:25.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3735" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1747,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:39:25.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on 
every node of the cluster. Apr 26 21:39:26.013: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:26.016: INFO: Number of nodes with available pods: 0 Apr 26 21:39:26.016: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:27.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:27.063: INFO: Number of nodes with available pods: 0 Apr 26 21:39:27.063: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:28.020: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:28.023: INFO: Number of nodes with available pods: 0 Apr 26 21:39:28.023: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:29.026: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:29.049: INFO: Number of nodes with available pods: 0 Apr 26 21:39:29.049: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:30.021: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:30.025: INFO: Number of nodes with available pods: 1 Apr 26 21:39:30.025: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:31.023: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:31.026: INFO: Number of nodes with available pods: 2 Apr 26 
21:39:31.026: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 26 21:39:31.048: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:31.054: INFO: Number of nodes with available pods: 1 Apr 26 21:39:31.054: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:32.062: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:32.065: INFO: Number of nodes with available pods: 1 Apr 26 21:39:32.065: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:33.066: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:33.107: INFO: Number of nodes with available pods: 1 Apr 26 21:39:33.107: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:34.062: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:34.066: INFO: Number of nodes with available pods: 1 Apr 26 21:39:34.066: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:35.059: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:35.063: INFO: Number of nodes with available pods: 1 Apr 26 21:39:35.063: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:36.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Apr 26 21:39:36.064: INFO: Number of nodes with available pods: 1 Apr 26 21:39:36.064: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:37.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:37.063: INFO: Number of nodes with available pods: 1 Apr 26 21:39:37.063: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:38.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:38.063: INFO: Number of nodes with available pods: 1 Apr 26 21:39:38.063: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:39.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:39.063: INFO: Number of nodes with available pods: 1 Apr 26 21:39:39.063: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:40.059: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:40.061: INFO: Number of nodes with available pods: 1 Apr 26 21:39:40.062: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:41.072: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:41.076: INFO: Number of nodes with available pods: 1 Apr 26 21:39:41.076: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:42.058: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:42.061: INFO: Number of nodes with available pods: 1 Apr 26 21:39:42.061: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:39:43.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:39:43.063: INFO: Number of nodes with available pods: 2 Apr 26 21:39:43.063: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6469, will wait for the garbage collector to delete the pods Apr 26 21:39:43.126: INFO: Deleting DaemonSet.extensions daemon-set took: 6.60838ms Apr 26 21:39:43.427: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.235598ms Apr 26 21:39:49.330: INFO: Number of nodes with available pods: 0 Apr 26 21:39:49.330: INFO: Number of running nodes: 0, number of available pods: 0 Apr 26 21:39:49.334: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6469/daemonsets","resourceVersion":"11285878"},"items":null} Apr 26 21:39:49.336: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6469/pods","resourceVersion":"11285878"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:39:49.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6469" for this suite. 
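[Editor's note] The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above come from a taint/toleration check: a DaemonSet pod may only land on nodes whose every taint is covered by one of the pod's tolerations. A minimal sketch of that rule (simplified to key/effect matching; the real matcher also handles operator and value):

```python
def tolerates(toleration, taint):
    """True when one toleration covers one taint. Simplified: an empty
    key or effect in the toleration acts as a wildcard (Exists operator)."""
    key_ok = toleration.get("key", "") in ("", taint["key"])
    effect_ok = toleration.get("effect", "") in ("", taint["effect"])
    return key_ok and effect_ok

def schedulable_nodes(nodes, tolerations):
    """Names of nodes where every taint is tolerated by some toleration."""
    return [node["name"] for node in nodes
            if all(any(tolerates(t, taint) for t in tolerations)
                   for taint in node["taints"])]

# The cluster from this log: a tainted control plane and two untainted workers.
nodes = [
    {"name": "jerma-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master",
                 "effect": "NoSchedule"}]},
    {"name": "jerma-worker", "taints": []},
    {"name": "jerma-worker2", "taints": []},
]
```

The e2e DaemonSet carries no master toleration, so only the two workers qualify — matching the "Number of running nodes: 2" the test converges on.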
• [SLOW TEST:23.473 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":109,"skipped":1751,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:39:49.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:39:53.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1845" for this suite. 
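[Editor's note] The Docker Containers tests in this run ("use the image defaults if command and args are blank", "override the image's default command and arguments") exercise the documented interaction between a container's `command`/`args` fields and the image's ENTRYPOINT/CMD. A small sketch of that resolution table (hypothetical helper, not e2e framework code):

```python
def effective_argv(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the argv a container runs with, per the documented
    Kubernetes rules: command overrides ENTRYPOINT, args override CMD."""
    if command is None and args is None:
        return image_entrypoint + image_cmd   # image defaults
    if command is not None and args is None:
        return command                        # command alone; CMD ignored
    if command is None:
        return image_entrypoint + args        # args replace CMD only
    return command + args                     # both overridden
```

For example, setting only `args` keeps the image's ENTRYPOINT but replaces its CMD, which is what the "override the image's default arguments (docker cmd)" test verifies.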
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1811,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:39:53.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 21:39:53.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f" in namespace "downward-api-2348" to be "success or failure" Apr 26 21:39:53.585: INFO: Pod "downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443255ms Apr 26 21:39:55.628: INFO: Pod "downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046310856s Apr 26 21:39:57.633: INFO: Pod "downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051423683s STEP: Saw pod success Apr 26 21:39:57.634: INFO: Pod "downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f" satisfied condition "success or failure" Apr 26 21:39:57.636: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f container client-container: STEP: delete the pod Apr 26 21:39:57.690: INFO: Waiting for pod downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f to disappear Apr 26 21:39:57.726: INFO: Pod downwardapi-volume-9a8123fb-a13b-401c-bbec-64b649bedc7f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:39:57.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2348" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1814,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:39:57.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 26 21:39:57.806: INFO: PodSpec: initContainers in spec.initContainers Apr 26 21:40:47.307: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-53ee5339-3ee4-4528-b22b-500465905f2f", GenerateName:"", Namespace:"init-container-7840", SelfLink:"/api/v1/namespaces/init-container-7840/pods/pod-init-53ee5339-3ee4-4528-b22b-500465905f2f", UID:"ec7fa52d-5008-4d65-a06a-b719a5607dd9", ResourceVersion:"11286153", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723533997, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"806896021"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-djs49", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0048d6100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-djs49", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-djs49", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-djs49", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ea67e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025ee180), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ea6b30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ea6b50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001ea6b58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ea6b5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533997, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533997, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533997, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723533997, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.210", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.210"}}, StartTime:(*v1.Time)(0xc0030e8160), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00298e310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00298e380)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0fba8f05166c1b7bac5333432f57410c617e3c7cec854ed8b2c58d46ea65952f", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030e81a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030e8180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, 
Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001ea6bdf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:40:47.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7840" for this suite. • [SLOW TEST:49.581 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":112,"skipped":1826,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:40:47.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 26 21:40:47.450: INFO: Waiting up to 5m0s for pod 
"client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543" in namespace "containers-5576" to be "success or failure" Apr 26 21:40:47.457: INFO: Pod "client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543": Phase="Pending", Reason="", readiness=false. Elapsed: 7.354403ms Apr 26 21:40:49.462: INFO: Pod "client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011577718s Apr 26 21:40:51.466: INFO: Pod "client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016021712s STEP: Saw pod success Apr 26 21:40:51.466: INFO: Pod "client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543" satisfied condition "success or failure" Apr 26 21:40:51.469: INFO: Trying to get logs from node jerma-worker2 pod client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543 container test-container: STEP: delete the pod Apr 26 21:40:51.489: INFO: Waiting for pod client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543 to disappear Apr 26 21:40:51.493: INFO: Pod client-containers-70b8af83-6291-4914-aaa7-bc1616ddb543 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:40:51.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5576" for this suite. 
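The passing test above verifies that a pod-spec `args` list overrides the image's default command (the docker `CMD`). A minimal illustrative manifest of the kind of pod this test creates (names and argument values here are placeholders, not the test's exact object):

```yaml
# Sketch of an args-override pod: `args` replaces the image's CMD;
# since no `command` is set, the image's entrypoint (if any) is kept.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    args: ["echo", "override", "arguments"]
```

The test then reads the container's logs to confirm the overridden arguments were what actually ran, which is why the pod only needs to reach "Succeeded".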
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1830,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:40:51.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-sd2p STEP: Creating a pod to test atomic-volume-subpath Apr 26 21:40:51.591: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-sd2p" in namespace "subpath-506" to be "success or failure" Apr 26 21:40:51.606: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.807204ms Apr 26 21:40:53.610: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019049776s Apr 26 21:40:55.615: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 4.023242568s Apr 26 21:40:57.619: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.027900982s Apr 26 21:40:59.623: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 8.032052012s Apr 26 21:41:01.628: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 10.036201866s Apr 26 21:41:03.632: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 12.040547794s Apr 26 21:41:05.636: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 14.044844118s Apr 26 21:41:07.640: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 16.049081408s Apr 26 21:41:09.645: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 18.053568585s Apr 26 21:41:11.648: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 20.057126588s Apr 26 21:41:13.653: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Running", Reason="", readiness=true. Elapsed: 22.061174599s Apr 26 21:41:15.657: INFO: Pod "pod-subpath-test-downwardapi-sd2p": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.065567163s STEP: Saw pod success Apr 26 21:41:15.657: INFO: Pod "pod-subpath-test-downwardapi-sd2p" satisfied condition "success or failure" Apr 26 21:41:15.660: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-sd2p container test-container-subpath-downwardapi-sd2p: STEP: delete the pod Apr 26 21:41:15.678: INFO: Waiting for pod pod-subpath-test-downwardapi-sd2p to disappear Apr 26 21:41:15.682: INFO: Pod pod-subpath-test-downwardapi-sd2p no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-sd2p Apr 26 21:41:15.683: INFO: Deleting pod "pod-subpath-test-downwardapi-sd2p" in namespace "subpath-506" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:41:15.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-506" for this suite. • [SLOW TEST:24.190 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":114,"skipped":1836,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Apr 26 21:41:15.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 21:41:16.465: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 21:41:18.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 21:41:20.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534076, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 21:41:23.519: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:41:35.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2044" for this suite. STEP: Destroying namespace "webhook-2044-markers" for this suite. 
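The timeout behavior exercised above (request fails when `timeoutSeconds` is shorter than webhook latency, but succeeds under `failurePolicy: Ignore`) can be sketched with a hypothetical v1 webhook registration; the service name matches the log, but the webhook name, path, and rules below are illustrative assumptions:

```yaml
# Illustrative slow-webhook registration: a 1s timeout against a ~5s
# webhook would fail the request, unless failurePolicy is Ignore.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example
webhooks:
- name: slow.example.com
  timeoutSeconds: 1        # shorter than the webhook's latency
  failurePolicy: Ignore    # timeout is tolerated rather than rejecting the request
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: default
      path: /always-allow-delay-5s   # hypothetical slow endpoint
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

Note the test's final step: when `timeoutSeconds` is omitted, the v1 API defaults it to 10s, which is longer than the webhook latency, so the request succeeds.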
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.063 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":115,"skipped":1838,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:41:35.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:41:35.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1910" for this suite. 
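The create/get/update/delete cycle above operates on a plain ResourceQuota object. A minimal example of such a quota (the name and hard limits are illustrative, not the test's exact values):

```yaml
# A ResourceQuota like the one this test creates, modifies, and deletes.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    pods: "5"
    services: "2"
```

Updating the quota means patching `spec.hard` and verifying the change is reflected on read-back; the test finishes by confirming a GET after DELETE reports the object as gone.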
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":116,"skipped":1842,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:41:35.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-4cfc1d32-194c-4d76-be02-6d86b44557be in namespace container-probe-370 Apr 26 21:41:39.991: INFO: Started pod busybox-4cfc1d32-194c-4d76-be02-6d86b44557be in namespace container-probe-370 STEP: checking the pod's current state and verifying that restartCount is present Apr 26 21:41:39.994: INFO: Initial restart count of pod busybox-4cfc1d32-194c-4d76-be02-6d86b44557be is 0 Apr 26 21:42:28.118: INFO: Restart count of pod container-probe-370/busybox-4cfc1d32-194c-4d76-be02-6d86b44557be is now 1 (48.123438091s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:42:28.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-370" for this 
suite. • [SLOW TEST:52.306 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1857,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:42:28.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-780.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-780.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-780.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-780.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local;check="$$(dig +tcp 
+noall +answer +search _http._tcp.dns-test-service.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-780.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-780.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-780.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.114.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.114.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.114.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.114.170_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-780.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-780.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-780.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-780.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-780.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-780.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-780.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-780.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.114.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.114.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.114.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.114.170_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 21:42:34.584: INFO: Unable to read wheezy_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.590: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.592: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.614: INFO: Unable to read jessie_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.623: INFO: Unable to read jessie_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod 
dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.651: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:34.671: INFO: Lookups using dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5 failed for: [wheezy_udp@dns-test-service.dns-780.svc.cluster.local wheezy_tcp@dns-test-service.dns-780.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_udp@dns-test-service.dns-780.svc.cluster.local jessie_tcp@dns-test-service.dns-780.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local] Apr 26 21:42:39.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.680: INFO: Unable to read wheezy_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.683: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod 
dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.702: INFO: Unable to read jessie_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.705: INFO: Unable to read jessie_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.708: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.711: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:39.729: INFO: Lookups using dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5 failed for: [wheezy_udp@dns-test-service.dns-780.svc.cluster.local wheezy_tcp@dns-test-service.dns-780.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_udp@dns-test-service.dns-780.svc.cluster.local jessie_tcp@dns-test-service.dns-780.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local] Apr 26 21:42:44.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the 
server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.709: INFO: Unable to read jessie_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.712: INFO: Unable to read jessie_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.715: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.718: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods 
dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:44.737: INFO: Lookups using dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5 failed for: [wheezy_udp@dns-test-service.dns-780.svc.cluster.local wheezy_tcp@dns-test-service.dns-780.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_udp@dns-test-service.dns-780.svc.cluster.local jessie_tcp@dns-test-service.dns-780.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local] Apr 26 21:42:49.676: INFO: Unable to read wheezy_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:49.680: INFO: Unable to read wheezy_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:49.684: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:49.687: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:49.710: INFO: Unable to read jessie_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 
21:42:49.712: INFO: Unable to read jessie_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:49.715: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:49.717: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:49.733: INFO: Lookups using dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5 failed for: [wheezy_udp@dns-test-service.dns-780.svc.cluster.local wheezy_tcp@dns-test-service.dns-780.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_udp@dns-test-service.dns-780.svc.cluster.local jessie_tcp@dns-test-service.dns-780.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local] Apr 26 21:42:54.675: INFO: Unable to read wheezy_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.678: INFO: Unable to read wheezy_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.680: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.682: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.700: INFO: Unable to read jessie_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.705: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.708: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:54.724: INFO: Lookups using dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5 failed for: [wheezy_udp@dns-test-service.dns-780.svc.cluster.local wheezy_tcp@dns-test-service.dns-780.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_udp@dns-test-service.dns-780.svc.cluster.local 
jessie_tcp@dns-test-service.dns-780.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local] Apr 26 21:42:59.676: INFO: Unable to read wheezy_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.679: INFO: Unable to read wheezy_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.682: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.706: INFO: Unable to read jessie_udp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find 
the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local from pod dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5: the server could not find the requested resource (get pods dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5) Apr 26 21:42:59.730: INFO: Lookups using dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5 failed for: [wheezy_udp@dns-test-service.dns-780.svc.cluster.local wheezy_tcp@dns-test-service.dns-780.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_udp@dns-test-service.dns-780.svc.cluster.local jessie_tcp@dns-test-service.dns-780.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-780.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-780.svc.cluster.local] Apr 26 21:43:04.737: INFO: DNS probes using dns-780/dns-test-c5fa3bca-262e-429d-aa55-f677a55208d5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:43:05.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-780" for this suite. 
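The DNS probes above cycle through a fixed matrix of lookup names: two client images ("wheezy", "jessie"), UDP and TCP, against both the service A record and its `_http._tcp` SRV record. A minimal sketch reconstructing that probe-name set from the names visible in the log (the generation order and helper name are assumptions inferred from the output, not the e2e framework's actual code):

```python
# Sketch: rebuild the probe-name matrix seen in the log above.
# The test queries each name from two images ("wheezy", "jessie") over
# UDP and TCP; the ordering below mirrors the "Lookups ... failed for"
# list in the log and is an assumption, not the real e2e implementation.

def dns_probe_names(service, namespace):
    """Return lookup keys like 'wheezy_udp@<service fqdn>'."""
    fqdn = f"{service}.{namespace}.svc.cluster.local"
    targets = [fqdn, f"_http._tcp.{fqdn}"]  # service record and SRV record
    return [
        f"{image}_{proto}@{target}"
        for image in ("wheezy", "jessie")
        for target in targets
        for proto in ("udp", "tcp")
    ]

names = dns_probe_names("dns-test-service", "dns-780")
```

Once every name in this set resolves from inside the probe pod, the log flips from "Lookups ... failed" to "DNS probes ... succeeded".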
• [SLOW TEST:37.117 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":118,"skipped":1865,"failed":0} SSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:43:05.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:43:09.678: INFO: Waiting up to 5m0s for pod "client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db" in namespace "pods-9921" to be "success or failure" Apr 26 21:43:09.683: INFO: Pod "client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.994458ms Apr 26 21:43:11.686: INFO: Pod "client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008163803s Apr 26 21:43:13.691: INFO: Pod "client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013479618s STEP: Saw pod success Apr 26 21:43:13.691: INFO: Pod "client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db" satisfied condition "success or failure" Apr 26 21:43:13.694: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db container env3cont: STEP: delete the pod Apr 26 21:43:13.728: INFO: Waiting for pod client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db to disappear Apr 26 21:43:13.734: INFO: Pod client-envvars-f94f5b01-2609-4588-b55a-85ac9ab6b2db no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:43:13.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9921" for this suite. • [SLOW TEST:8.415 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1869,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:43:13.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be 
updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 26 21:43:18.394: INFO: Successfully updated pod "pod-update-47d016b4-0883-405e-bb01-584d9fa1136e" STEP: verifying the updated pod is in kubernetes Apr 26 21:43:18.416: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:43:18.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7223" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1871,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:43:18.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 21:43:18.514: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907" in namespace "downward-api-9332" to be "success or failure" Apr 26 21:43:18.536: INFO: Pod "downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907": Phase="Pending", Reason="", readiness=false. Elapsed: 21.474771ms Apr 26 21:43:20.540: INFO: Pod "downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025571235s Apr 26 21:43:22.559: INFO: Pod "downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045228934s STEP: Saw pod success Apr 26 21:43:22.559: INFO: Pod "downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907" satisfied condition "success or failure" Apr 26 21:43:22.562: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907 container client-container: STEP: delete the pod Apr 26 21:43:22.603: INFO: Waiting for pod downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907 to disappear Apr 26 21:43:22.617: INFO: Pod downwardapi-volume-a8162d8a-016c-4285-8c0c-4df4783b3907 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:43:22.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9332" for this suite. 
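The repeated 'Waiting up to 5m0s for pod ... to be "success or failure"' entries, with their Pending, Pending, Succeeded progression, come from a poll-until-terminal-phase loop. A hedged sketch of that pattern (function and parameter names are illustrative, not the framework's):

```python
import time

# Sketch of the poll loop behind the repeated
# 'Waiting up to 5m0s for pod ... to be "success or failure"' log lines.
# get_phase is a hypothetical callable returning the pod's phase string;
# the real framework queries the API server for the Pod object instead.

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0):
    """Poll until the pod reaches a terminal phase, mirroring the
    Pending -> Pending -> Succeeded progression seen in the log."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```

The ~2 s gaps between the Elapsed values in the log correspond to the polling interval.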
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:43:22.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3774 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3774 STEP: creating replication controller externalsvc in namespace services-3774 I0426 21:43:22.823008 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3774, replica count: 2 I0426 21:43:25.873536 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 21:43:28.873736 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 26 21:43:28.919: INFO: Creating new exec pod Apr 26 21:43:32.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3774 execpodxnzkz -- /bin/sh -x -c nslookup clusterip-service' Apr 26 21:43:33.407: INFO: stderr: "I0426 21:43:33.288357 1785 log.go:172] (0xc000a08420) (0xc000b88320) Create stream\nI0426 21:43:33.288406 1785 log.go:172] (0xc000a08420) (0xc000b88320) Stream added, broadcasting: 1\nI0426 21:43:33.292926 1785 log.go:172] (0xc000a08420) Reply frame received for 1\nI0426 21:43:33.292983 1785 log.go:172] (0xc000a08420) (0xc0005cc780) Create stream\nI0426 21:43:33.292996 1785 log.go:172] (0xc000a08420) (0xc0005cc780) Stream added, broadcasting: 3\nI0426 21:43:33.293929 1785 log.go:172] (0xc000a08420) Reply frame received for 3\nI0426 21:43:33.293970 1785 log.go:172] (0xc000a08420) (0xc000299540) Create stream\nI0426 21:43:33.293990 1785 log.go:172] (0xc000a08420) (0xc000299540) Stream added, broadcasting: 5\nI0426 21:43:33.294932 1785 log.go:172] (0xc000a08420) Reply frame received for 5\nI0426 21:43:33.390766 1785 log.go:172] (0xc000a08420) Data frame received for 5\nI0426 21:43:33.390802 1785 log.go:172] (0xc000299540) (5) Data frame handling\nI0426 21:43:33.390828 1785 log.go:172] (0xc000299540) (5) Data frame sent\n+ nslookup clusterip-service\nI0426 21:43:33.398275 1785 log.go:172] (0xc000a08420) Data frame received for 3\nI0426 21:43:33.398295 1785 log.go:172] (0xc0005cc780) (3) Data frame handling\nI0426 21:43:33.398312 1785 log.go:172] (0xc0005cc780) (3) Data frame sent\nI0426 21:43:33.399144 1785 log.go:172] (0xc000a08420) Data frame received for 3\nI0426 21:43:33.399162 1785 log.go:172] (0xc0005cc780) (3) Data frame handling\nI0426 21:43:33.399173 1785 log.go:172] (0xc0005cc780) (3) Data frame sent\nI0426 21:43:33.399673 1785 log.go:172] (0xc000a08420) Data frame received for 5\nI0426 
21:43:33.399707 1785 log.go:172] (0xc000299540) (5) Data frame handling\nI0426 21:43:33.399735 1785 log.go:172] (0xc000a08420) Data frame received for 3\nI0426 21:43:33.399748 1785 log.go:172] (0xc0005cc780) (3) Data frame handling\nI0426 21:43:33.401404 1785 log.go:172] (0xc000a08420) Data frame received for 1\nI0426 21:43:33.401430 1785 log.go:172] (0xc000b88320) (1) Data frame handling\nI0426 21:43:33.401449 1785 log.go:172] (0xc000b88320) (1) Data frame sent\nI0426 21:43:33.401461 1785 log.go:172] (0xc000a08420) (0xc000b88320) Stream removed, broadcasting: 1\nI0426 21:43:33.401524 1785 log.go:172] (0xc000a08420) Go away received\nI0426 21:43:33.401795 1785 log.go:172] (0xc000a08420) (0xc000b88320) Stream removed, broadcasting: 1\nI0426 21:43:33.401818 1785 log.go:172] (0xc000a08420) (0xc0005cc780) Stream removed, broadcasting: 3\nI0426 21:43:33.401832 1785 log.go:172] (0xc000a08420) (0xc000299540) Stream removed, broadcasting: 5\n" Apr 26 21:43:33.407: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3774.svc.cluster.local\tcanonical name = externalsvc.services-3774.svc.cluster.local.\nName:\texternalsvc.services-3774.svc.cluster.local\nAddress: 10.99.193.109\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3774, will wait for the garbage collector to delete the pods Apr 26 21:43:33.467: INFO: Deleting ReplicationController externalsvc took: 6.942471ms Apr 26 21:43:33.768: INFO: Terminating ReplicationController externalsvc pods took: 300.217502ms Apr 26 21:43:49.584: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:43:49.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3774" for this suite. 
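The nslookup stdout captured above is the assertion target: after the type change, the ClusterIP service name must resolve via a CNAME to the externalName backend. A small sketch of parsing that output to extract the canonical name (the helper is illustrative; the sample text is taken from the stdout in the log):

```python
# Sketch: parse nslookup output like the stdout captured above to
# confirm the ClusterIP name now resolves via CNAME to the
# externalName target. Helper name is an assumption for illustration.

def canonical_name(nslookup_stdout):
    """Return the CNAME target from an 'X canonical name = Y.' line, or None."""
    for line in nslookup_stdout.splitlines():
        if "canonical name =" in line:
            return line.split("canonical name =")[1].strip().rstrip(".")
    return None

out = (
    "Server:\t\t10.96.0.10\n"
    "Address:\t10.96.0.10#53\n\n"
    "clusterip-service.services-3774.svc.cluster.local\tcanonical name = "
    "externalsvc.services-3774.svc.cluster.local.\n"
)
```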
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.983 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":122,"skipped":1973,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:43:49.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 21:43:49.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4" in namespace "downward-api-6309" to be "success or failure" Apr 26 21:43:49.672: INFO: Pod "downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.179394ms Apr 26 21:43:51.919: INFO: Pod "downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248806885s Apr 26 21:43:53.923: INFO: Pod "downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.253215881s STEP: Saw pod success Apr 26 21:43:53.923: INFO: Pod "downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4" satisfied condition "success or failure" Apr 26 21:43:53.926: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4 container client-container: STEP: delete the pod Apr 26 21:43:54.051: INFO: Waiting for pod downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4 to disappear Apr 26 21:43:54.062: INFO: Pod downwardapi-volume-fee90615-9a18-4e5a-980b-6d615b7dd5c4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:43:54.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6309" for this suite. 
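The "should set mode on item file" test above mounts a downward API item with an explicit `mode` integer and verifies the permissions the container observes. A sketch of how such a mode value maps to the `ls -l`-style permission string (the specific mode values are illustrative assumptions, not taken from the log):

```python
import stat

# Sketch: a DownwardAPIVolumeFile 'mode' is an integer; this shows how
# it maps to the permission string a container would see with 'ls -l'.
# The example modes below are illustrative, not from the test itself.

def mode_string(mode):
    return stat.filemode(stat.S_IFREG | mode)

# e.g. an owner-read-only item file:
example = mode_string(0o400)
```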
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:43:54.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-398/configmap-test-f740bdc4-4414-4fca-80ea-bf55346bb9bd STEP: Creating a pod to test consume configMaps Apr 26 21:43:54.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57" in namespace "configmap-398" to be "success or failure" Apr 26 21:43:54.145: INFO: Pod "pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57": Phase="Pending", Reason="", readiness=false. Elapsed: 22.231542ms Apr 26 21:43:56.188: INFO: Pod "pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064974294s Apr 26 21:43:58.193: INFO: Pod "pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069519956s STEP: Saw pod success Apr 26 21:43:58.193: INFO: Pod "pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57" satisfied condition "success or failure" Apr 26 21:43:58.196: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57 container env-test: STEP: delete the pod Apr 26 21:43:58.254: INFO: Waiting for pod pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57 to disappear Apr 26 21:43:58.258: INFO: Pod pod-configmaps-cb81abdd-3f28-489b-a72e-896aa4e26f57 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:43:58.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-398" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2014,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:43:58.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a3aeaad8-cbd7-4594-acd6-5025e8ef444f STEP: Creating a pod to test consume secrets Apr 26 21:43:58.311: INFO: Waiting up to 5m0s for pod 
"pod-secrets-231adefc-f460-45d6-bc78-f42420923931" in namespace "secrets-143" to be "success or failure" Apr 26 21:43:58.315: INFO: Pod "pod-secrets-231adefc-f460-45d6-bc78-f42420923931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222833ms Apr 26 21:44:00.344: INFO: Pod "pod-secrets-231adefc-f460-45d6-bc78-f42420923931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032958489s Apr 26 21:44:02.362: INFO: Pod "pod-secrets-231adefc-f460-45d6-bc78-f42420923931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051216662s STEP: Saw pod success Apr 26 21:44:02.362: INFO: Pod "pod-secrets-231adefc-f460-45d6-bc78-f42420923931" satisfied condition "success or failure" Apr 26 21:44:02.365: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-231adefc-f460-45d6-bc78-f42420923931 container secret-volume-test: STEP: delete the pod Apr 26 21:44:02.400: INFO: Waiting for pod pod-secrets-231adefc-f460-45d6-bc78-f42420923931 to disappear Apr 26 21:44:02.417: INFO: Pod pod-secrets-231adefc-f460-45d6-bc78-f42420923931 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:44:02.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-143" for this suite. 
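The "consumable in multiple volumes" test above mounts a single secret at more than one path in one pod. A hedged sketch of what such a pod spec looks like, as a plain dict (volume names and mount paths are illustrative assumptions; only the secret name comes from the log):

```python
# Sketch (illustrative volume names and mount paths): one secret
# mounted at two mount points, as in the test above. Only the secret
# name is taken from the log; everything else is an assumption.

secret_name = "secret-test-a3aeaad8-cbd7-4594-acd6-5025e8ef444f"
pod_spec = {
    "volumes": [
        {"name": "secret-volume-1", "secret": {"secretName": secret_name}},
        {"name": "secret-volume-2", "secret": {"secretName": secret_name}},
    ],
    "containers": [{
        "name": "secret-volume-test",
        "volumeMounts": [
            {"name": "secret-volume-1",
             "mountPath": "/etc/secret-volume-1", "readOnly": True},
            {"name": "secret-volume-2",
             "mountPath": "/etc/secret-volume-2", "readOnly": True},
        ],
    }],
}
```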
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2022,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:44:02.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-807 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 26 21:44:02.544: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 26 21:44:26.762: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.121:8080/dial?request=hostname&protocol=udp&host=10.244.1.217&port=8081&tries=1'] Namespace:pod-network-test-807 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 21:44:26.762: INFO: >>> kubeConfig: /root/.kube/config I0426 21:44:26.796645 6 log.go:172] (0xc0025262c0) (0xc002754960) Create stream I0426 21:44:26.796678 6 log.go:172] (0xc0025262c0) (0xc002754960) Stream added, broadcasting: 1 I0426 21:44:26.798707 6 log.go:172] (0xc0025262c0) Reply frame received for 1 I0426 21:44:26.798765 6 log.go:172] (0xc0025262c0) 
(0xc0027c2000) Create stream I0426 21:44:26.798788 6 log.go:172] (0xc0025262c0) (0xc0027c2000) Stream added, broadcasting: 3 I0426 21:44:26.799759 6 log.go:172] (0xc0025262c0) Reply frame received for 3 I0426 21:44:26.799811 6 log.go:172] (0xc0025262c0) (0xc001e10000) Create stream I0426 21:44:26.799843 6 log.go:172] (0xc0025262c0) (0xc001e10000) Stream added, broadcasting: 5 I0426 21:44:26.800853 6 log.go:172] (0xc0025262c0) Reply frame received for 5 I0426 21:44:26.908487 6 log.go:172] (0xc0025262c0) Data frame received for 3 I0426 21:44:26.908517 6 log.go:172] (0xc0027c2000) (3) Data frame handling I0426 21:44:26.908540 6 log.go:172] (0xc0027c2000) (3) Data frame sent I0426 21:44:26.909443 6 log.go:172] (0xc0025262c0) Data frame received for 5 I0426 21:44:26.909458 6 log.go:172] (0xc001e10000) (5) Data frame handling I0426 21:44:26.909564 6 log.go:172] (0xc0025262c0) Data frame received for 3 I0426 21:44:26.909588 6 log.go:172] (0xc0027c2000) (3) Data frame handling I0426 21:44:26.911233 6 log.go:172] (0xc0025262c0) Data frame received for 1 I0426 21:44:26.911245 6 log.go:172] (0xc002754960) (1) Data frame handling I0426 21:44:26.911251 6 log.go:172] (0xc002754960) (1) Data frame sent I0426 21:44:26.911258 6 log.go:172] (0xc0025262c0) (0xc002754960) Stream removed, broadcasting: 1 I0426 21:44:26.911265 6 log.go:172] (0xc0025262c0) Go away received I0426 21:44:26.911386 6 log.go:172] (0xc0025262c0) (0xc002754960) Stream removed, broadcasting: 1 I0426 21:44:26.911472 6 log.go:172] (0xc0025262c0) (0xc0027c2000) Stream removed, broadcasting: 3 I0426 21:44:26.911511 6 log.go:172] (0xc0025262c0) (0xc001e10000) Stream removed, broadcasting: 5 Apr 26 21:44:26.911: INFO: Waiting for responses: map[] Apr 26 21:44:26.915: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.121:8080/dial?request=hostname&protocol=udp&host=10.244.2.120&port=8081&tries=1'] Namespace:pod-network-test-807 PodName:host-test-container-pod ContainerName:agnhost Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 21:44:26.915: INFO: >>> kubeConfig: /root/.kube/config I0426 21:44:26.949644 6 log.go:172] (0xc0016c8210) (0xc001e106e0) Create stream I0426 21:44:26.949691 6 log.go:172] (0xc0016c8210) (0xc001e106e0) Stream added, broadcasting: 1 I0426 21:44:26.951734 6 log.go:172] (0xc0016c8210) Reply frame received for 1 I0426 21:44:26.951768 6 log.go:172] (0xc0016c8210) (0xc002754b40) Create stream I0426 21:44:26.951777 6 log.go:172] (0xc0016c8210) (0xc002754b40) Stream added, broadcasting: 3 I0426 21:44:26.952830 6 log.go:172] (0xc0016c8210) Reply frame received for 3 I0426 21:44:26.952897 6 log.go:172] (0xc0016c8210) (0xc001e10820) Create stream I0426 21:44:26.952916 6 log.go:172] (0xc0016c8210) (0xc001e10820) Stream added, broadcasting: 5 I0426 21:44:26.954261 6 log.go:172] (0xc0016c8210) Reply frame received for 5 I0426 21:44:27.022178 6 log.go:172] (0xc0016c8210) Data frame received for 3 I0426 21:44:27.022213 6 log.go:172] (0xc002754b40) (3) Data frame handling I0426 21:44:27.022237 6 log.go:172] (0xc002754b40) (3) Data frame sent I0426 21:44:27.022805 6 log.go:172] (0xc0016c8210) Data frame received for 3 I0426 21:44:27.022836 6 log.go:172] (0xc002754b40) (3) Data frame handling I0426 21:44:27.022955 6 log.go:172] (0xc0016c8210) Data frame received for 5 I0426 21:44:27.022980 6 log.go:172] (0xc001e10820) (5) Data frame handling I0426 21:44:27.024767 6 log.go:172] (0xc0016c8210) Data frame received for 1 I0426 21:44:27.024801 6 log.go:172] (0xc001e106e0) (1) Data frame handling I0426 21:44:27.024822 6 log.go:172] (0xc001e106e0) (1) Data frame sent I0426 21:44:27.024847 6 log.go:172] (0xc0016c8210) (0xc001e106e0) Stream removed, broadcasting: 1 I0426 21:44:27.024894 6 log.go:172] (0xc0016c8210) Go away received I0426 21:44:27.025000 6 log.go:172] (0xc0016c8210) (0xc001e106e0) Stream removed, broadcasting: 1 I0426 21:44:27.025027 6 log.go:172] (0xc0016c8210) (0xc002754b40) Stream removed, broadcasting: 3 
I0426 21:44:27.025046 6 log.go:172] (0xc0016c8210) (0xc001e10820) Stream removed, broadcasting: 5 Apr 26 21:44:27.025: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:44:27.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-807" for this suite. • [SLOW TEST:24.607 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2044,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:44:27.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-e656bcd3-495a-46b2-b28e-a99f815dbd87 STEP: Creating a pod to test consume configMaps Apr 26 21:44:27.120: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe" in namespace "projected-266" to be "success or failure" Apr 26 21:44:27.168: INFO: Pod "pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 47.794517ms Apr 26 21:44:29.183: INFO: Pod "pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062302051s Apr 26 21:44:31.191: INFO: Pod "pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070632101s STEP: Saw pod success Apr 26 21:44:31.191: INFO: Pod "pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe" satisfied condition "success or failure" Apr 26 21:44:31.194: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe container projected-configmap-volume-test: STEP: delete the pod Apr 26 21:44:31.210: INFO: Waiting for pod pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe to disappear Apr 26 21:44:31.214: INFO: Pod pod-projected-configmaps-636a6ad0-a4bf-4f94-9a8d-3fec980cc5fe no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:44:31.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-266" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2090,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:44:31.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 26 21:44:31.348: INFO: Waiting up to 5m0s for pod "downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84" in namespace "downward-api-9301" to be "success or failure" Apr 26 21:44:31.364: INFO: Pod "downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84": Phase="Pending", Reason="", readiness=false. Elapsed: 16.180471ms Apr 26 21:44:33.404: INFO: Pod "downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056503531s Apr 26 21:44:35.409: INFO: Pod "downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061037677s Apr 26 21:44:37.412: INFO: Pod "downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.06393632s STEP: Saw pod success Apr 26 21:44:37.412: INFO: Pod "downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84" satisfied condition "success or failure" Apr 26 21:44:37.414: INFO: Trying to get logs from node jerma-worker2 pod downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84 container dapi-container: STEP: delete the pod Apr 26 21:44:37.442: INFO: Waiting for pod downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84 to disappear Apr 26 21:44:37.458: INFO: Pod downward-api-59a54eb1-a33c-49af-ab42-0c5365aaea84 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:44:37.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9301" for this suite. • [SLOW TEST:6.244 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2100,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:44:37.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 21:44:38.219: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 21:44:40.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534278, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534278, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534278, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723534278, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 21:44:43.303: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap 
that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:44:43.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9245" for this suite. STEP: Destroying namespace "webhook-9245-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.402 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":129,"skipped":2108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:44:43.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:44:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6568" for this suite. • [SLOW TEST:11.224 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":130,"skipped":2140,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:44:55.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components
Apr 26 21:44:55.169: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Apr 26 21:44:55.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7120'
Apr 26 21:44:58.024: INFO: stderr: ""
Apr 26 21:44:58.024: INFO: stdout: "service/agnhost-slave created\n"
Apr 26 21:44:58.024: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Apr 26 21:44:58.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7120'
Apr 26 21:44:58.343: INFO: stderr: ""
Apr 26 21:44:58.343: INFO: stdout: "service/agnhost-master created\n"
Apr 26 21:44:58.343: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 26 21:44:58.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7120'
Apr 26 21:44:58.608: INFO: stderr: ""
Apr 26 21:44:58.608: INFO: stdout: "service/frontend created\n"
Apr 26 21:44:58.608: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 26 21:44:58.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7120'
Apr 26 21:44:58.839: INFO: stderr: ""
Apr 26 21:44:58.839: INFO: stdout: "deployment.apps/frontend created\n"
Apr 26 21:44:58.839: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 26 21:44:58.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7120'
Apr 26 21:44:59.159: INFO: stderr: ""
Apr 26 21:44:59.159: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 26 21:44:59.159: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 26 21:44:59.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7120'
Apr 26 21:44:59.448: INFO: stderr: ""
Apr 26 21:44:59.448: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 26 21:44:59.448: INFO: Waiting for all frontend pods to be Running.
Apr 26 21:45:09.498: INFO: Waiting for frontend to serve content.
Apr 26 21:45:09.509: INFO: Trying to add a new entry to the guestbook.
Apr 26 21:45:09.519: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 26 21:45:09.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7120'
Apr 26 21:45:09.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 26 21:45:09.703: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 26 21:45:09.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7120'
Apr 26 21:45:09.845: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 26 21:45:09.845: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 26 21:45:09.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7120' Apr 26 21:45:09.988: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 21:45:09.988: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 26 21:45:09.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7120' Apr 26 21:45:10.093: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 21:45:10.093: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 26 21:45:10.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7120' Apr 26 21:45:10.194: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 21:45:10.194: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 26 21:45:10.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7120' Apr 26 21:45:10.297: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 26 21:45:10.297: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:45:10.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7120" for this suite. • [SLOW TEST:15.212 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":131,"skipped":2152,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:45:10.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 26 21:45:16.532: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:45:16.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1192" for this suite. • [SLOW TEST:6.267 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2162,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:45:16.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:45:30.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6623" for this suite. • [SLOW TEST:14.070 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":133,"skipped":2169,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:45:30.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c91f3c8a-2b19-401d-938d-df5e651624d8 STEP: Creating a pod to test consume secrets Apr 26 21:45:30.746: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6" in namespace "projected-8751" to be "success or failure" Apr 26 21:45:30.773: INFO: Pod "pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.307086ms Apr 26 21:45:32.777: INFO: Pod "pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030743715s Apr 26 21:45:34.780: INFO: Pod "pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034500113s STEP: Saw pod success Apr 26 21:45:34.780: INFO: Pod "pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6" satisfied condition "success or failure" Apr 26 21:45:34.783: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6 container projected-secret-volume-test: STEP: delete the pod Apr 26 21:45:34.806: INFO: Waiting for pod pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6 to disappear Apr 26 21:45:34.811: INFO: Pod pod-projected-secrets-ba403d02-ed3c-4464-8bee-9708574ab0b6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:45:34.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8751" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:45:34.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 26 21:45:34.915: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:45:48.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4692" for this suite. 
• [SLOW TEST:13.528 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":135,"skipped":2239,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:45:48.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2860 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-2860 Apr 26 21:45:48.462: INFO: Found 0 stateful pods, waiting for 1 Apr 26 21:45:58.467: INFO: Waiting for pod ss-0 to enter 
Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 26 21:45:58.504: INFO: Deleting all statefulset in ns statefulset-2860 Apr 26 21:45:58.603: INFO: Scaling statefulset ss to 0 Apr 26 21:46:18.648: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 21:46:18.652: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:46:18.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2860" for this suite. • [SLOW TEST:30.312 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":136,"skipped":2249,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:46:18.676: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:46:18.755: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 6.877792ms)
Apr 26 21:46:18.758: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.384892ms)
Apr 26 21:46:18.761: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.771687ms)
Apr 26 21:46:18.764: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.929195ms)
Apr 26 21:46:18.767: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.850238ms)
Apr 26 21:46:18.770: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.943023ms)
Apr 26 21:46:18.772: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.482877ms)
Apr 26 21:46:18.775: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.600554ms)
Apr 26 21:46:18.778: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.943807ms)
Apr 26 21:46:18.781: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.033863ms)
Apr 26 21:46:18.784: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.767869ms)
Apr 26 21:46:18.787: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.279148ms)
Apr 26 21:46:18.793: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 5.449163ms)
Apr 26 21:46:18.797: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.536966ms)
Apr 26 21:46:18.800: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.857763ms)
Apr 26 21:46:18.803: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.420975ms)
Apr 26 21:46:18.805: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.665796ms)
Apr 26 21:46:18.808: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.155616ms)
Apr 26 21:46:18.810: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.606313ms)
Apr 26 21:46:18.813: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 2.416159ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:46:18.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9689" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":137,"skipped":2263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:46:18.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 26 21:46:23.435: INFO: Successfully updated pod "annotationupdatec4382b26-d245-4016-8136-ab41ed534efd" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:46:25.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1513" for this suite. 
• [SLOW TEST:6.663 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2309,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:46:25.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:46:25.646: INFO: Create a RollingUpdate DaemonSet
Apr 26 21:46:25.650: INFO: Check that daemon pods launch on every node of the cluster
Apr 26 21:46:25.668: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:25.672: INFO: Number of nodes with available pods: 0
Apr 26 21:46:25.672: INFO: Node jerma-worker is running more than one daemon pod
Apr 26 21:46:26.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:26.680: INFO: Number of nodes with available pods: 0
Apr 26 21:46:26.680: INFO: Node jerma-worker is running more than one daemon pod
Apr 26 21:46:27.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:27.739: INFO: Number of nodes with available pods: 0
Apr 26 21:46:27.739: INFO: Node jerma-worker is running more than one daemon pod
Apr 26 21:46:28.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:28.681: INFO: Number of nodes with available pods: 0
Apr 26 21:46:28.681: INFO: Node jerma-worker is running more than one daemon pod
Apr 26 21:46:29.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:29.682: INFO: Number of nodes with available pods: 1
Apr 26 21:46:29.682: INFO: Node jerma-worker2 is running more than one daemon pod
Apr 26 21:46:30.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:30.697: INFO: Number of nodes with available pods: 2
Apr 26 21:46:30.697: INFO: Number of running nodes: 2, number of available pods: 2
Apr 26 21:46:30.697: INFO: Update the DaemonSet to trigger a rollout
Apr 26 21:46:30.702: INFO: Updating DaemonSet daemon-set
Apr 26 21:46:39.761: INFO: Roll back the DaemonSet before rollout is complete
Apr 26 21:46:39.766: INFO: Updating DaemonSet daemon-set
Apr 26 21:46:39.766: INFO: Make sure DaemonSet rollback is complete
Apr 26 21:46:39.831: INFO: Wrong image for pod: daemon-set-dksdd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 26 21:46:39.831: INFO: Pod daemon-set-dksdd is not available
Apr 26 21:46:39.835: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:40.839: INFO: Wrong image for pod: daemon-set-dksdd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 26 21:46:40.839: INFO: Pod daemon-set-dksdd is not available
Apr 26 21:46:40.844: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:41.859: INFO: Wrong image for pod: daemon-set-dksdd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 26 21:46:41.859: INFO: Pod daemon-set-dksdd is not available
Apr 26 21:46:41.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 26 21:46:42.839: INFO: Pod daemon-set-r494v is not available
Apr 26 21:46:42.843: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5390, will wait for the garbage collector to delete the pods
Apr 26 21:46:42.908: INFO: Deleting DaemonSet.extensions daemon-set took: 5.971718ms
Apr 26 21:46:43.211: INFO: Terminating DaemonSet.extensions daemon-set pods took: 303.142972ms
Apr 26 21:46:46.114: INFO: Number of nodes with available pods: 0
Apr 26 21:46:46.114: INFO: Number of running nodes: 0, number of available pods: 0
Apr 26 21:46:46.117: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5390/daemonsets","resourceVersion":"11288464"},"items":null}
Apr 26 21:46:46.120: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5390/pods","resourceVersion":"11288464"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:46:46.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5390" for this suite.
• [SLOW TEST:20.654 seconds]
[sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":139,"skipped":2350,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:46:46.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0426 21:46:56.234860 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 26 21:46:56.234: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:46:56.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9025" for this suite.
• [SLOW TEST:10.102 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":140,"skipped":2351,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:46:56.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-f56085b4-8ad7-4c5b-b12d-fd78febeb044
STEP: Creating a pod to test consume configMaps
Apr 26 21:46:56.389: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c" in namespace "projected-3510" to be "success or failure"
Apr 26 21:46:56.398: INFO: Pod "pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.409008ms
Apr 26 21:46:58.429: INFO: Pod "pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040294272s
Apr 26 21:47:00.442: INFO: Pod "pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053447174s
STEP: Saw pod success
Apr 26 21:47:00.442: INFO: Pod "pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c" satisfied condition "success or failure"
Apr 26 21:47:00.446: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c container projected-configmap-volume-test:
STEP: delete the pod
Apr 26 21:47:00.495: INFO: Waiting for pod pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c to disappear
Apr 26 21:47:00.562: INFO: Pod pod-projected-configmaps-e87d27f0-e974-46da-90c1-bee9a33cf71c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:47:00.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3510" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2374,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:47:00.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 26 21:47:00.651: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 26 21:47:05.655: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:47:05.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2950" for this suite.
• [SLOW TEST:5.211 seconds]
[sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":142,"skipped":2378,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:47:05.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 26 21:47:10.922: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:47:10.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5732" for this suite.
• [SLOW TEST:5.210 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
        /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2380,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:47:10.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:47:11.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 26 21:47:11.647: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-26T21:47:11Z generation:1 name:name1 resourceVersion:11288718 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f4197348-e4a0-4edf-b8b4-8d7fd2be12ea] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 26 21:47:21.652: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-26T21:47:21Z generation:1 name:name2 resourceVersion:11288758 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:06824e4e-b291-491e-80d0-6477a4cbba9e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 26 21:47:31.658: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-26T21:47:11Z generation:2 name:name1 resourceVersion:11288788 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f4197348-e4a0-4edf-b8b4-8d7fd2be12ea] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 26 21:47:41.664: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-26T21:47:21Z generation:2 name:name2 resourceVersion:11288818 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:06824e4e-b291-491e-80d0-6477a4cbba9e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 26 21:47:51.671: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-26T21:47:11Z generation:2 name:name1 resourceVersion:11288848 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f4197348-e4a0-4edf-b8b4-8d7fd2be12ea] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 26 21:48:01.679: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-26T21:47:21Z generation:2 name:name2 resourceVersion:11288878 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:06824e4e-b291-491e-80d0-6477a4cbba9e] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:48:12.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-140" for this suite.
• [SLOW TEST:61.207 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":144,"skipped":2396,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:48:12.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 26 21:48:12.302: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233" in namespace "downward-api-1688" to be "success or failure"
Apr 26 21:48:12.310: INFO: Pod "downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233": Phase="Pending", Reason="", readiness=false. Elapsed: 7.469324ms
Apr 26 21:48:14.314: INFO: Pod "downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011307369s
Apr 26 21:48:16.317: INFO: Pod "downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015022926s
STEP: Saw pod success
Apr 26 21:48:16.318: INFO: Pod "downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233" satisfied condition "success or failure"
Apr 26 21:48:16.319: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233 container client-container:
STEP: delete the pod
Apr 26 21:48:16.442: INFO: Waiting for pod downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233 to disappear
Apr 26 21:48:16.506: INFO: Pod downwardapi-volume-ed4d5858-b6bb-4563-82a9-61b06b0a3233 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:48:16.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1688" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2437,"failed":0}
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:48:16.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-552032e6-9f44-4cd7-9599-71d34d449435
STEP: Creating a pod to test consume secrets
Apr 26 21:48:16.683: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566" in namespace "projected-8440" to be "success or failure"
Apr 26 21:48:16.687: INFO: Pod "pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616326ms
Apr 26 21:48:18.692: INFO: Pod "pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00805343s
Apr 26 21:48:20.696: INFO: Pod "pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012073204s
STEP: Saw pod success
Apr 26 21:48:20.696: INFO: Pod "pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566" satisfied condition "success or failure"
Apr 26 21:48:20.698: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566 container projected-secret-volume-test:
STEP: delete the pod
Apr 26 21:48:20.811: INFO: Waiting for pod pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566 to disappear
Apr 26 21:48:20.831: INFO: Pod pod-projected-secrets-7e7613d0-f4a6-4c04-b7a4-265b3f8a0566 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:48:20.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8440" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2438,"failed":0}
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:48:20.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5841
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 26 21:48:20.873: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 26 21:48:45.074: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.232:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5841 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 26 21:48:45.074: INFO: >>> kubeConfig: /root/.kube/config
I0426 21:48:45.109812 6 log.go:172] (0xc001d46630) (0xc00226aaa0) Create stream
I0426 21:48:45.109848 6 log.go:172] (0xc001d46630) (0xc00226aaa0) Stream added, broadcasting: 1
I0426 21:48:45.112209 6 log.go:172] (0xc001d46630) Reply frame received for 1
I0426 21:48:45.112259 6 log.go:172] (0xc001d46630) (0xc00226abe0) Create stream
I0426 21:48:45.112277 6 log.go:172] (0xc001d46630) (0xc00226abe0) Stream added, broadcasting: 3
I0426 21:48:45.113415 6 log.go:172] (0xc001d46630) Reply frame received for 3
I0426 21:48:45.113465 6 log.go:172] (0xc001d46630) (0xc0019640a0) Create stream
I0426 21:48:45.113486 6 log.go:172] (0xc001d46630) (0xc0019640a0) Stream added, broadcasting: 5
I0426 21:48:45.114450 6 log.go:172] (0xc001d46630) Reply frame received for 5
I0426 21:48:45.181077 6 log.go:172] (0xc001d46630) Data frame received for 3
I0426 21:48:45.181108 6 log.go:172] (0xc00226abe0) (3) Data frame handling
I0426 21:48:45.181298 6 log.go:172] (0xc00226abe0) (3) Data frame sent
I0426 21:48:45.181329 6 log.go:172] (0xc001d46630) Data frame received for 3
I0426 21:48:45.181359 6 log.go:172] (0xc00226abe0) (3) Data frame handling
I0426 21:48:45.181394 6 log.go:172] (0xc001d46630) Data frame received for 5
I0426 21:48:45.181429 6 log.go:172] (0xc0019640a0) (5) Data frame handling
I0426 21:48:45.183205 6 log.go:172] (0xc001d46630) Data frame received for 1
I0426 21:48:45.183243 6 log.go:172] (0xc00226aaa0) (1) Data frame handling
I0426 21:48:45.183262 6 log.go:172] (0xc00226aaa0) (1) Data frame sent
I0426 21:48:45.183295 6 log.go:172] (0xc001d46630) (0xc00226aaa0) Stream removed, broadcasting: 1
I0426 21:48:45.183315 6 log.go:172] (0xc001d46630) Go away received
I0426 21:48:45.183372 6 log.go:172] (0xc001d46630) (0xc00226aaa0) Stream removed, broadcasting: 1
I0426 21:48:45.183386 6 log.go:172] (0xc001d46630) (0xc00226abe0) Stream removed, broadcasting: 3
I0426 21:48:45.183393 6 log.go:172] (0xc001d46630) (0xc0019640a0) Stream removed, broadcasting: 5
Apr 26 21:48:45.183: INFO: Found all expected endpoints: [netserver-0]
Apr 26 21:48:45.187: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.140:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5841 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 26 21:48:45.187: INFO: >>> kubeConfig: /root/.kube/config
I0426 21:48:45.224049 6 log.go:172] (0xc0016c8370) (0xc0027c3900) Create stream
I0426 21:48:45.224083 6 log.go:172] (0xc0016c8370) (0xc0027c3900) Stream added, broadcasting: 1
I0426 21:48:45.226719 6 log.go:172] (0xc0016c8370) Reply frame received for 1
I0426 21:48:45.226781 6 log.go:172] (0xc0016c8370) (0xc0027c3ae0) Create stream
I0426 21:48:45.226809 6 log.go:172] (0xc0016c8370) (0xc0027c3ae0) Stream added, broadcasting: 3
I0426 21:48:45.227707 6 log.go:172] (0xc0016c8370) Reply frame received for 3
I0426 21:48:45.227737 6 log.go:172] (0xc0016c8370) (0xc001d08000) Create stream
I0426 21:48:45.227757 6 log.go:172] (0xc0016c8370) (0xc001d08000) Stream added, broadcasting: 5
I0426 21:48:45.228607 6 log.go:172] (0xc0016c8370) Reply frame received for 5
I0426 21:48:45.291266 6 log.go:172] (0xc0016c8370) Data frame received for 3
I0426 21:48:45.291304 6 log.go:172] (0xc0027c3ae0) (3) Data frame handling
I0426 21:48:45.291316 6 log.go:172] (0xc0027c3ae0) (3) Data frame sent
I0426 21:48:45.291336 6 log.go:172] (0xc0016c8370) Data frame received for 3
I0426 21:48:45.291348 6 log.go:172] (0xc0027c3ae0) (3) Data frame handling
I0426 21:48:45.291378 6 log.go:172] (0xc0016c8370) Data frame received for 5
I0426 21:48:45.291392 6 log.go:172] (0xc001d08000) (5) Data frame handling
I0426 21:48:45.292756 6 log.go:172] (0xc0016c8370) Data frame received for 1
I0426 21:48:45.292777 6 log.go:172] (0xc0027c3900) (1) Data frame handling
I0426 21:48:45.292789 6 log.go:172] (0xc0027c3900) (1) Data frame sent
I0426 21:48:45.292812 6 log.go:172] (0xc0016c8370) (0xc0027c3900) Stream removed, broadcasting: 1
I0426 21:48:45.292914 6 log.go:172] (0xc0016c8370) Go away received
I0426 21:48:45.292948 6 log.go:172] (0xc0016c8370) (0xc0027c3900) Stream removed, broadcasting: 1
I0426 21:48:45.292980 6 log.go:172] (0xc0016c8370) (0xc0027c3ae0)
Stream removed, broadcasting: 3 I0426 21:48:45.292998 6 log.go:172] (0xc0016c8370) (0xc001d08000) Stream removed, broadcasting: 5 Apr 26 21:48:45.293: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:48:45.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5841" for this suite. • [SLOW TEST:24.462 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2438,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:48:45.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
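The Go-struct pod dump that follows is dense; condensed into a manifest, the dnsPolicy=None pod created in this step is (fields taken directly from the dump — agnhost image, pause arg, nameserver 1.1.1.1, search domain resolv.conf.local):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-1963
  namespace: dns-1963
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
  dnsPolicy: "None"          # ignore cluster DNS entirely
  dnsConfig:                 # resolv.conf is generated from this stanza alone
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
```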
Apr 26 21:48:45.399: INFO: Created pod &Pod{ObjectMeta:{dns-1963 dns-1963 /api/v1/namespaces/dns-1963/pods/dns-1963 d9b27df3-efb3-4735-873c-289823287bf3 11289104 0 2020-04-26 21:48:45 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45zxg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45zxg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45zxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname
:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 26 21:48:49.410: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1963 PodName:dns-1963 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 21:48:49.410: INFO: >>> kubeConfig: /root/.kube/config I0426 21:48:49.449732 6 log.go:172] (0xc0029fa420) (0xc0019645a0) Create stream I0426 21:48:49.449768 6 log.go:172] (0xc0029fa420) (0xc0019645a0) Stream added, broadcasting: 1 I0426 21:48:49.452209 6 log.go:172] (0xc0029fa420) Reply frame received for 1 I0426 21:48:49.452243 6 log.go:172] (0xc0029fa420) (0xc001964640) Create stream I0426 21:48:49.452253 6 log.go:172] (0xc0029fa420) (0xc001964640) Stream added, broadcasting: 3 I0426 21:48:49.452967 6 log.go:172] (0xc0029fa420) Reply frame received for 3 I0426 21:48:49.452995 6 log.go:172] (0xc0029fa420) (0xc001964fa0) Create stream I0426 21:48:49.453005 6 log.go:172] (0xc0029fa420) (0xc001964fa0) Stream added, broadcasting: 5 I0426 21:48:49.453795 6 log.go:172] (0xc0029fa420) Reply frame received for 5 I0426 21:48:49.540010 6 log.go:172] (0xc0029fa420) Data frame received for 3 I0426 21:48:49.540049 6 log.go:172] (0xc001964640) (3) Data frame handling I0426 21:48:49.540070 6 log.go:172] (0xc001964640) (3) Data frame sent I0426 21:48:49.540911 6 log.go:172] (0xc0029fa420) Data frame received for 5 I0426 21:48:49.540935 6 log.go:172] (0xc001964fa0) (5) Data frame handling I0426 21:48:49.540968 6 log.go:172] (0xc0029fa420) Data frame received for 3 I0426 21:48:49.540982 6 log.go:172] (0xc001964640) (3) Data frame handling I0426 21:48:49.543089 6 log.go:172] (0xc0029fa420) Data frame received for 1 I0426 21:48:49.543107 6 log.go:172] (0xc0019645a0) (1) Data frame handling I0426 21:48:49.543123 6 log.go:172] (0xc0019645a0) (1) Data frame sent I0426 21:48:49.543147 6 log.go:172] (0xc0029fa420) (0xc0019645a0) Stream removed, broadcasting: 1 I0426 21:48:49.543238 6 log.go:172] (0xc0029fa420) Go away received I0426 21:48:49.543306 6 log.go:172] (0xc0029fa420) 
(0xc0019645a0) Stream removed, broadcasting: 1 I0426 21:48:49.543334 6 log.go:172] (0xc0029fa420) (0xc001964640) Stream removed, broadcasting: 3 I0426 21:48:49.543344 6 log.go:172] (0xc0029fa420) (0xc001964fa0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 26 21:48:49.543: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1963 PodName:dns-1963 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 21:48:49.543: INFO: >>> kubeConfig: /root/.kube/config I0426 21:48:49.572901 6 log.go:172] (0xc0029fa9a0) (0xc0019652c0) Create stream I0426 21:48:49.572933 6 log.go:172] (0xc0029fa9a0) (0xc0019652c0) Stream added, broadcasting: 1 I0426 21:48:49.575595 6 log.go:172] (0xc0029fa9a0) Reply frame received for 1 I0426 21:48:49.575643 6 log.go:172] (0xc0029fa9a0) (0xc001d092c0) Create stream I0426 21:48:49.575665 6 log.go:172] (0xc0029fa9a0) (0xc001d092c0) Stream added, broadcasting: 3 I0426 21:48:49.576759 6 log.go:172] (0xc0029fa9a0) Reply frame received for 3 I0426 21:48:49.576803 6 log.go:172] (0xc0029fa9a0) (0xc0027c3b80) Create stream I0426 21:48:49.576819 6 log.go:172] (0xc0029fa9a0) (0xc0027c3b80) Stream added, broadcasting: 5 I0426 21:48:49.578144 6 log.go:172] (0xc0029fa9a0) Reply frame received for 5 I0426 21:48:49.640682 6 log.go:172] (0xc0029fa9a0) Data frame received for 3 I0426 21:48:49.640713 6 log.go:172] (0xc001d092c0) (3) Data frame handling I0426 21:48:49.640735 6 log.go:172] (0xc001d092c0) (3) Data frame sent I0426 21:48:49.641673 6 log.go:172] (0xc0029fa9a0) Data frame received for 5 I0426 21:48:49.641710 6 log.go:172] (0xc0027c3b80) (5) Data frame handling I0426 21:48:49.641748 6 log.go:172] (0xc0029fa9a0) Data frame received for 3 I0426 21:48:49.641777 6 log.go:172] (0xc001d092c0) (3) Data frame handling I0426 21:48:49.643325 6 log.go:172] (0xc0029fa9a0) Data frame received for 1 I0426 21:48:49.643351 6 log.go:172] (0xc0019652c0) (1) 
Data frame handling I0426 21:48:49.643371 6 log.go:172] (0xc0019652c0) (1) Data frame sent I0426 21:48:49.643397 6 log.go:172] (0xc0029fa9a0) (0xc0019652c0) Stream removed, broadcasting: 1 I0426 21:48:49.643443 6 log.go:172] (0xc0029fa9a0) Go away received I0426 21:48:49.643513 6 log.go:172] (0xc0029fa9a0) (0xc0019652c0) Stream removed, broadcasting: 1 I0426 21:48:49.643616 6 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc001d092c0), 0x5:(*spdystream.Stream)(0xc0027c3b80)} I0426 21:48:49.643675 6 log.go:172] (0xc0029fa9a0) (0xc001d092c0) Stream removed, broadcasting: 3 I0426 21:48:49.643713 6 log.go:172] (0xc0029fa9a0) (0xc0027c3b80) Stream removed, broadcasting: 5 Apr 26 21:48:49.643: INFO: Deleting pod dns-1963... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:48:49.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1963" for this suite. 
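Given the DNSConfig in the pod dump above (Nameservers:[1.1.1.1], Searches:[resolv.conf.local]) and dnsPolicy None, the pod's /etc/resolv.conf is generated solely from that stanza, which is what the two agnhost exec checks (dns-suffix, dns-server-list) verify. The generated file is, in effect:

```
nameserver 1.1.1.1
search resolv.conf.local
```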
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":148,"skipped":2455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:48:49.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 26 21:48:54.627: INFO: Successfully updated pod "labelsupdated72f7913-a778-45c6-ada9-901f01c8c8bd" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:48:58.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8293" for this suite. 
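The projected downwardAPI test above ("should update labels on modification") watches a file that mirrors the pod's labels. A minimal sketch of such a pod follows; the container name client-container appears in the log, but the image, args, label key, and mount path are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example   # hypothetical; the logged pod carries a UUID suffix
  labels:
    key: value1                # label the test later mutates
spec:
  containers:
  - name: client-container     # name as logged
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo  # assumed mount path
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # kubelet rewrites this file when labels change
```

Updating `metadata.labels` on the running pod causes the kubelet to regenerate /etc/podinfo/labels in place, which is the propagation the "Successfully updated pod" step confirms.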
• [SLOW TEST:9.003 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2489,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:48:58.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 26 21:48:58.792: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 21:48:58.803: INFO: Waiting for terminating namespaces to be deleted... 
Apr 26 21:48:58.806: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 26 21:48:58.810: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:48:58.810: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 21:48:58.810: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:48:58.810: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 21:48:58.810: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 26 21:48:58.815: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 26 21:48:58.816: INFO: Container kube-hunter ready: false, restart count 0 Apr 26 21:48:58.816: INFO: labelsupdated72f7913-a778-45c6-ada9-901f01c8c8bd from projected-8293 started at 2020-04-26 21:48:50 +0000 UTC (1 container statuses recorded) Apr 26 21:48:58.816: INFO: Container client-container ready: true, restart count 0 Apr 26 21:48:58.816: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:48:58.816: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 21:48:58.816: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 26 21:48:58.816: INFO: Container kube-bench ready: false, restart count 0 Apr 26 21:48:58.816: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 26 21:48:58.816: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 26 
21:48:58.893: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 26 21:48:58.893: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 26 21:48:58.893: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Apr 26 21:48:58.893: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 Apr 26 21:48:58.893: INFO: Pod labelsupdated72f7913-a778-45c6-ada9-901f01c8c8bd requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 26 21:48:58.893: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 26 21:48:58.919: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-7150696b-bd29-447c-8263-3cdf8b5479c1.16097cf5d80a60a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-636/filler-pod-7150696b-bd29-447c-8263-3cdf8b5479c1 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7150696b-bd29-447c-8263-3cdf8b5479c1.16097cf658dacad1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7150696b-bd29-447c-8263-3cdf8b5479c1.16097cf68da596b7], Reason = [Created], Message = [Created container filler-pod-7150696b-bd29-447c-8263-3cdf8b5479c1] STEP: Considering event: Type = [Normal], Name = [filler-pod-7150696b-bd29-447c-8263-3cdf8b5479c1.16097cf69c812b30], Reason = [Started], Message = [Started container filler-pod-7150696b-bd29-447c-8263-3cdf8b5479c1] STEP: Considering event: Type = [Normal], Name = [filler-pod-9ceedb5c-5622-42eb-ae17-4dd75b6e707b.16097cf5d6574c92], Reason = [Scheduled], Message = [Successfully assigned sched-pred-636/filler-pod-9ceedb5c-5622-42eb-ae17-4dd75b6e707b to jerma-worker] STEP: Considering event: Type = [Normal], 
Name = [filler-pod-9ceedb5c-5622-42eb-ae17-4dd75b6e707b.16097cf6202aefaf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9ceedb5c-5622-42eb-ae17-4dd75b6e707b.16097cf6732f3b34], Reason = [Created], Message = [Created container filler-pod-9ceedb5c-5622-42eb-ae17-4dd75b6e707b] STEP: Considering event: Type = [Normal], Name = [filler-pod-9ceedb5c-5622-42eb-ae17-4dd75b6e707b.16097cf68a01b543], Reason = [Started], Message = [Started container filler-pod-9ceedb5c-5622-42eb-ae17-4dd75b6e707b] STEP: Considering event: Type = [Warning], Name = [additional-pod.16097cf74065db46], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:49:06.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-636" for this suite. 
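The FailedScheduling event above is produced deliberately: the suite pins one filler pod per worker, each requesting cpu=11130m (sized from the nodes' allocatable CPU minus the running pods' requests logged earlier), so a subsequent pod cannot fit. Sketched as a manifest, with the node label and CPU figure taken from the log and the pod/container names as illustrative stand-ins:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example     # hypothetical; logged filler pods carry UUID suffixes
spec:
  nodeSelector:
    node: jerma-worker         # label the test applies, then removes after the check
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1   # image from the Pulled events above
    resources:
      requests:
        cpu: 11130m            # consumes (nearly) all remaining allocatable CPU
      limits:
        cpu: 11130m
```

With both workers saturated and the control-plane node tainted, the additional pod's "0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu." event is exactly what the test asserts.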
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.371 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":150,"skipped":2496,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:49:06.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 26 21:49:06.158: INFO: Waiting up to 5m0s for pod "pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23" in namespace "emptydir-2972" to be "success or failure" Apr 26 21:49:06.162: INFO: Pod "pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322238ms Apr 26 21:49:08.166: INFO: Pod "pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007341823s Apr 26 21:49:10.169: INFO: Pod "pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010963963s STEP: Saw pod success Apr 26 21:49:10.169: INFO: Pod "pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23" satisfied condition "success or failure" Apr 26 21:49:10.172: INFO: Trying to get logs from node jerma-worker pod pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23 container test-container: STEP: delete the pod Apr 26 21:49:10.187: INFO: Waiting for pod pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23 to disappear Apr 26 21:49:10.207: INFO: Pod pod-0a5e6797-d6b4-4d43-af3b-f99ebf8ded23 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:49:10.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2972" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2497,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:49:10.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-24bb7316-080e-429e-83ce-ea9eaa0d6459 in namespace container-probe-1282 Apr 26 21:49:14.309: INFO: Started pod test-webserver-24bb7316-080e-429e-83ce-ea9eaa0d6459 in namespace container-probe-1282 STEP: checking the pod's current state and verifying that restartCount is present Apr 26 21:49:14.312: INFO: Initial restart count of pod test-webserver-24bb7316-080e-429e-83ce-ea9eaa0d6459 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:53:14.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1282" for this suite. • [SLOW TEST:244.761 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2501,"failed":0} [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:53:14.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in 
namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:53:15.041: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 26 21:53:17.074: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:53:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2303" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":153,"skipped":2501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:53:18.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:53:35.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1464" for this suite. • [SLOW TEST:17.333 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":154,"skipped":2531,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:53:35.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5322 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 26 21:53:35.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 26 21:53:59.831: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.238:8080/dial?request=hostname&protocol=http&host=10.244.1.237&port=8080&tries=1'] Namespace:pod-network-test-5322 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 21:53:59.831: INFO: >>> kubeConfig: /root/.kube/config I0426 21:53:59.864704 6 log.go:172] (0xc001d46210) (0xc001696960) Create stream I0426 21:53:59.864736 6 log.go:172] (0xc001d46210) (0xc001696960) Stream added, broadcasting: 1 I0426 21:53:59.866797 6 log.go:172] (0xc001d46210) Reply frame received for 1 I0426 21:53:59.866849 6 log.go:172] (0xc001d46210) (0xc001696b40) Create stream I0426 21:53:59.866861 6 log.go:172] (0xc001d46210) (0xc001696b40) Stream added, broadcasting: 
3 I0426 21:53:59.867703 6 log.go:172] (0xc001d46210) Reply frame received for 3 I0426 21:53:59.867745 6 log.go:172] (0xc001d46210) (0xc001696d20) Create stream I0426 21:53:59.867759 6 log.go:172] (0xc001d46210) (0xc001696d20) Stream added, broadcasting: 5 I0426 21:53:59.868533 6 log.go:172] (0xc001d46210) Reply frame received for 5 I0426 21:53:59.964268 6 log.go:172] (0xc001d46210) Data frame received for 3 I0426 21:53:59.964299 6 log.go:172] (0xc001696b40) (3) Data frame handling I0426 21:53:59.964329 6 log.go:172] (0xc001696b40) (3) Data frame sent I0426 21:53:59.965030 6 log.go:172] (0xc001d46210) Data frame received for 3 I0426 21:53:59.965063 6 log.go:172] (0xc001696b40) (3) Data frame handling I0426 21:53:59.965095 6 log.go:172] (0xc001d46210) Data frame received for 5 I0426 21:53:59.965264 6 log.go:172] (0xc001696d20) (5) Data frame handling I0426 21:53:59.967180 6 log.go:172] (0xc001d46210) Data frame received for 1 I0426 21:53:59.967213 6 log.go:172] (0xc001696960) (1) Data frame handling I0426 21:53:59.967232 6 log.go:172] (0xc001696960) (1) Data frame sent I0426 21:53:59.967249 6 log.go:172] (0xc001d46210) (0xc001696960) Stream removed, broadcasting: 1 I0426 21:53:59.967330 6 log.go:172] (0xc001d46210) (0xc001696960) Stream removed, broadcasting: 1 I0426 21:53:59.967356 6 log.go:172] (0xc001d46210) (0xc001696b40) Stream removed, broadcasting: 3 I0426 21:53:59.967379 6 log.go:172] (0xc001d46210) (0xc001696d20) Stream removed, broadcasting: 5 Apr 26 21:53:59.967: INFO: Waiting for responses: map[] I0426 21:53:59.967469 6 log.go:172] (0xc001d46210) Go away received Apr 26 21:53:59.971: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.238:8080/dial?request=hostname&protocol=http&host=10.244.2.146&port=8080&tries=1'] Namespace:pod-network-test-5322 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 21:53:59.971: INFO: >>> kubeConfig: /root/.kube/config 
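The two `ExecWithOptions` entries above probe intra-pod connectivity by asking the agnhost container's `/dial` endpoint to fetch `/hostname` from a target pod. A minimal sketch of how that probe URL is assembled (the helper name `dial_url` is an illustration, not the framework's actual function; the parameter values are taken verbatim from the log):

```python
from urllib.parse import urlencode

def dial_url(proxy_ip, target_ip, port=8080, tries=1):
    """Build the agnhost /dial probe URL seen in the ExecWithOptions log entries:
    the pod at proxy_ip is asked to fetch /hostname from target_ip over HTTP."""
    query = urlencode({
        "request": "hostname",   # endpoint the proxy pod should call on the target
        "protocol": "http",
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:8080/dial?{query}"

print(dial_url("10.244.1.238", "10.244.1.237"))
# → http://10.244.1.238:8080/dial?request=hostname&protocol=http&host=10.244.1.237&port=8080&tries=1
```

The test then collects the returned hostnames until the expected set is seen, which is why the log reports `Waiting for responses: map[]` once every target has answered.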
I0426 21:54:00.007345 6 log.go:172] (0xc0029b73f0) (0xc001a554a0) Create stream I0426 21:54:00.007374 6 log.go:172] (0xc0029b73f0) (0xc001a554a0) Stream added, broadcasting: 1 I0426 21:54:00.009874 6 log.go:172] (0xc0029b73f0) Reply frame received for 1 I0426 21:54:00.009922 6 log.go:172] (0xc0029b73f0) (0xc001a55540) Create stream I0426 21:54:00.009937 6 log.go:172] (0xc0029b73f0) (0xc001a55540) Stream added, broadcasting: 3 I0426 21:54:00.011222 6 log.go:172] (0xc0029b73f0) Reply frame received for 3 I0426 21:54:00.011276 6 log.go:172] (0xc0029b73f0) (0xc001c9e1e0) Create stream I0426 21:54:00.011294 6 log.go:172] (0xc0029b73f0) (0xc001c9e1e0) Stream added, broadcasting: 5 I0426 21:54:00.012596 6 log.go:172] (0xc0029b73f0) Reply frame received for 5 I0426 21:54:00.081786 6 log.go:172] (0xc0029b73f0) Data frame received for 3 I0426 21:54:00.081838 6 log.go:172] (0xc001a55540) (3) Data frame handling I0426 21:54:00.081884 6 log.go:172] (0xc001a55540) (3) Data frame sent I0426 21:54:00.082719 6 log.go:172] (0xc0029b73f0) Data frame received for 3 I0426 21:54:00.082760 6 log.go:172] (0xc0029b73f0) Data frame received for 5 I0426 21:54:00.082819 6 log.go:172] (0xc001c9e1e0) (5) Data frame handling I0426 21:54:00.082867 6 log.go:172] (0xc001a55540) (3) Data frame handling I0426 21:54:00.084641 6 log.go:172] (0xc0029b73f0) Data frame received for 1 I0426 21:54:00.084658 6 log.go:172] (0xc001a554a0) (1) Data frame handling I0426 21:54:00.084672 6 log.go:172] (0xc001a554a0) (1) Data frame sent I0426 21:54:00.084690 6 log.go:172] (0xc0029b73f0) (0xc001a554a0) Stream removed, broadcasting: 1 I0426 21:54:00.084705 6 log.go:172] (0xc0029b73f0) Go away received I0426 21:54:00.084849 6 log.go:172] (0xc0029b73f0) (0xc001a554a0) Stream removed, broadcasting: 1 I0426 21:54:00.084869 6 log.go:172] (0xc0029b73f0) (0xc001a55540) Stream removed, broadcasting: 3 I0426 21:54:00.084880 6 log.go:172] (0xc0029b73f0) (0xc001c9e1e0) Stream removed, broadcasting: 5 Apr 26 21:54:00.084: INFO: 
Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:54:00.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5322" for this suite. • [SLOW TEST:24.655 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2532,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:54:00.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:54:11.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7779" for this suite. • [SLOW TEST:11.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":156,"skipped":2545,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:54:11.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 26 21:54:11.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 26 21:54:11.544: INFO: stderr: "" Apr 26 21:54:11.544: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:54:11.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1257" for this suite. 
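The `cluster-info` stdout recorded above is wrapped in ANSI color escapes (`\x1b[0;32m`, `\x1b[0;33m`, ...). A sketch of stripping those escapes to validate the "Kubernetes master" line, assuming a plain regex rather than whatever helper the e2e framework actually uses:

```python
import re

# CSI SGR (color) escape sequences, e.g. \x1b[0;32m ... \x1b[0m
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color codes so the output can be matched as plain text."""
    return ANSI_SGR.sub("", s)

# stdout fragment copied from the log entry above
stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n")
assert "Kubernetes master is running at https://172.30.12.66:32770" in strip_ansi(stdout)
```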
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":157,"skipped":2553,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:54:11.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 26 21:54:11.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:11.734: INFO: Number of nodes with available pods: 0 Apr 26 21:54:11.734: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:54:12.739: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:12.743: INFO: Number of nodes with available pods: 0 Apr 26 21:54:12.743: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:54:13.851: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:13.854: INFO: Number of nodes with available pods: 0 Apr 26 21:54:13.854: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:54:14.744: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:14.747: INFO: Number of nodes with available pods: 0 Apr 26 21:54:14.747: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:54:15.738: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:15.742: INFO: Number of nodes with available pods: 0 Apr 26 21:54:15.742: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:54:16.791: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:16.794: INFO: Number of nodes with available pods: 2 Apr 26 21:54:16.794: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 26 21:54:16.808: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:16.814: INFO: Number of nodes with available pods: 1 Apr 26 21:54:16.814: INFO: Node jerma-worker2 is running more than one daemon pod Apr 26 21:54:17.851: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:17.854: INFO: Number of nodes with available pods: 1 Apr 26 21:54:17.854: INFO: Node jerma-worker2 is running more than one daemon pod Apr 26 21:54:18.819: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:18.822: INFO: Number of nodes with available pods: 1 Apr 26 21:54:18.822: INFO: Node jerma-worker2 is running more than one daemon pod Apr 26 21:54:19.818: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:19.822: INFO: Number of nodes with available pods: 1 Apr 26 21:54:19.822: INFO: Node jerma-worker2 is running more than one daemon pod Apr 26 21:54:20.819: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 26 21:54:20.823: INFO: Number of nodes with available pods: 2 Apr 26 21:54:20.823: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4613, will wait for the garbage collector to delete the pods Apr 26 21:54:20.889: INFO: Deleting DaemonSet.extensions daemon-set took: 6.969387ms Apr 26 21:54:21.189: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.227777ms Apr 26 21:54:29.593: INFO: Number of nodes with available pods: 0 Apr 26 21:54:29.593: INFO: Number of running nodes: 0, number of available pods: 0 Apr 26 21:54:29.596: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4613/daemonsets","resourceVersion":"11290502"},"items":null} Apr 26 21:54:29.599: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4613/pods","resourceVersion":"11290502"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:54:29.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4613" for this suite. 
• [SLOW TEST:18.063 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":158,"skipped":2555,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:54:29.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 26 21:54:29.711: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6966 /api/v1/namespaces/watch-6966/configmaps/e2e-watch-test-watch-closed 9115d816-91d3-4628-a3cb-ce6ccb261d91 11290508 0 2020-04-26 21:54:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 26 21:54:29.711: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6966 
/api/v1/namespaces/watch-6966/configmaps/e2e-watch-test-watch-closed 9115d816-91d3-4628-a3cb-ce6ccb261d91 11290509 0 2020-04-26 21:54:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 26 21:54:29.722: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6966 /api/v1/namespaces/watch-6966/configmaps/e2e-watch-test-watch-closed 9115d816-91d3-4628-a3cb-ce6ccb261d91 11290510 0 2020-04-26 21:54:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 26 21:54:29.722: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6966 /api/v1/namespaces/watch-6966/configmaps/e2e-watch-test-watch-closed 9115d816-91d3-4628-a3cb-ce6ccb261d91 11290511 0 2020-04-26 21:54:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:54:29.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6966" for this suite. 
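The DNS subdomain test that follows derives each pod's A-record name from its IP with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'`. That pipeline amounts to replacing dots with dashes and appending the namespace's pod DNS suffix; a sketch (the function name is illustrative):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Dashed-IP pod A record, mirroring the awk pipeline in the probe commands:
    10.244.1.237 in namespace dns-6979 -> 10-244-1-237.dns-6979.pod.cluster.local"""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.237", "dns-6979"))
# → 10-244-1-237.dns-6979.pod.cluster.local
```

Each probe writes an `OK` marker file (e.g. `/results/wheezy_udp@PodARecord`) only when `dig` returns an answer, so the "Unable to read ..." entries below simply reflect probes that have not yet succeeded while DNS converges.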
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":159,"skipped":2564,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:54:29.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6979.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6979.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 21:54:35.865: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.869: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.872: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.876: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.886: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.889: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from 
pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.892: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.896: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:35.903: INFO: Lookups using dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] Apr 26 21:54:40.908: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.911: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.914: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from 
pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.916: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.924: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.927: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.929: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.932: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:40.938: INFO: Lookups using dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] Apr 26 21:54:45.907: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.911: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.914: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.916: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.923: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.925: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.927: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod 
dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.929: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:45.934: INFO: Lookups using dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] Apr 26 21:54:50.907: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.911: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.915: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.918: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod 
dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.926: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.928: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.931: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.934: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:50.941: INFO: Lookups using dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] Apr 26 21:54:55.908: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.912: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.915: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.919: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.929: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.932: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.935: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.938: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:54:55.944: INFO: Lookups using dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] Apr 26 21:55:00.908: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.912: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.915: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.918: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.948: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.950: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.954: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.956: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726: the server could not find the requested resource (get pods dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726) Apr 26 21:55:00.963: INFO: Lookups using dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] Apr 26 21:55:05.943: INFO: DNS probes using dns-6979/dns-test-4b2a07c5-2e98-4d18-96e0-85fa94218726 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 
21:55:06.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6979" for this suite. • [SLOW TEST:36.842 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":160,"skipped":2571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:55:06.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 21:55:06.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59" in namespace "downward-api-3238" to be "success or failure" Apr 26 21:55:06.744: INFO: Pod "downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.265119ms Apr 26 21:55:08.755: INFO: Pod "downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021537981s Apr 26 21:55:10.759: INFO: Pod "downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59": Phase="Running", Reason="", readiness=true. Elapsed: 4.025658591s Apr 26 21:55:12.773: INFO: Pod "downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039644702s STEP: Saw pod success Apr 26 21:55:12.773: INFO: Pod "downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59" satisfied condition "success or failure" Apr 26 21:55:12.784: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59 container client-container: STEP: delete the pod Apr 26 21:55:12.821: INFO: Waiting for pod downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59 to disappear Apr 26 21:55:12.832: INFO: Pod downwardapi-volume-a568b2a2-c864-46c7-8c47-4ca311513c59 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:55:12.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3238" for this suite. 
• [SLOW TEST:6.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2598,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:55:12.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-5783fbc3-ce89-415e-bc11-31cf6e9db620 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:55:12.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6246" for this suite. 
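The Secrets test above creates a Secret whose `data` map contains an empty string as a key, and expects the API server to reject it. A minimal Python sketch of that validation rule (this is an illustrative reimplementation, not the actual kube-apiserver code; the function name and error strings are hypothetical):

```python
import re

def validate_secret_keys(data):
    """Return a list of error strings for invalid Secret data keys.

    Loosely mimics the Kubernetes rule: each key must be a non-empty
    name of at most 253 characters drawn from [-._a-zA-Z0-9].
    (Illustrative sketch only, not the real apiserver validation.)
    """
    errors = []
    for key in data:
        if key == "":
            errors.append("data[]: key must not be empty")
        elif len(key) > 253 or not re.fullmatch(r"[-._a-zA-Z0-9]+", key):
            errors.append(f"data[{key!r}]: invalid key name")
    return errors

# An empty key, as in the test's secret-emptykey-test-... Secret, fails:
assert validate_secret_keys({"": "dmFsdWU="})
# A well-formed key passes:
assert not validate_secret_keys({"username": "YWRtaW4="})
```

Because the object never passes validation, the test completes without creating a pod, which is why the log shows only the creation attempt and immediate teardown.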
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":162,"skipped":2605,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:55:12.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-0172859f-362d-4e20-8c52-d318d6657bb5 STEP: Creating a pod to test consume secrets Apr 26 21:55:12.978: INFO: Waiting up to 5m0s for pod "pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327" in namespace "secrets-5412" to be "success or failure" Apr 26 21:55:12.993: INFO: Pod "pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327": Phase="Pending", Reason="", readiness=false. Elapsed: 15.430323ms Apr 26 21:55:14.997: INFO: Pod "pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019345161s Apr 26 21:55:17.002: INFO: Pod "pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023627102s STEP: Saw pod success Apr 26 21:55:17.002: INFO: Pod "pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327" satisfied condition "success or failure" Apr 26 21:55:17.005: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327 container secret-volume-test: STEP: delete the pod Apr 26 21:55:17.056: INFO: Waiting for pod pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327 to disappear Apr 26 21:55:17.060: INFO: Pod pod-secrets-7197e038-92f6-42ac-8797-3c2cce5dc327 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:55:17.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5412" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:55:17.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-7971 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-7971 STEP: Deleting pre-stop pod Apr 26 21:55:30.172: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:55:30.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7971" for this suite. • [SLOW TEST:13.130 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":164,"skipped":2629,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:55:30.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be
provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:55:45.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3171" for this suite. STEP: Destroying namespace "nsdeletetest-4837" for this suite. Apr 26 21:55:45.749: INFO: Namespace nsdeletetest-4837 was already deleted STEP: Destroying namespace "nsdeletetest-4303" for this suite. 
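The Namespaces test above verifies cascading deletion: removing a namespace removes every pod inside it, and a recreated namespace of the same name starts empty. A toy in-memory model of that behavior (all class and namespace names hypothetical; the real behavior is implemented by the namespace lifecycle controller):

```python
# Toy model of the invariant the test checks: namespace deletion
# cascades to the pods it contains. Not real Kubernetes code.
class Cluster:
    def __init__(self):
        self.namespaces = set()
        self.pods = {}  # (namespace, name) -> pod status

    def create_namespace(self, ns):
        self.namespaces.add(ns)

    def create_pod(self, ns, name):
        if ns not in self.namespaces:
            raise KeyError(f"namespace {ns} not found")
        self.pods[(ns, name)] = {"phase": "Running"}

    def delete_namespace(self, ns):
        # Deletion cascades: all contained objects are removed too.
        self.namespaces.discard(ns)
        self.pods = {k: v for k, v in self.pods.items() if k[0] != ns}

c = Cluster()
c.create_namespace("nsdeletetest")
c.create_pod("nsdeletetest", "test-pod")
c.delete_namespace("nsdeletetest")
c.create_namespace("nsdeletetest")  # recreate, as the test does
assert not [k for k in c.pods if k[0] == "nsdeletetest"]
```

In the real cluster this is asynchronous (the namespace enters a Terminating phase while its contents are garbage-collected), which is why the test waits roughly fifteen seconds for the namespace to disappear before recreating it.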
• [SLOW TEST:15.555 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":165,"skipped":2631,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:55:45.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 26 21:55:45.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3766' Apr 26 21:55:48.469: INFO: stderr: "" Apr 26 21:55:48.469: INFO: stdout: "pod/e2e-test-httpd-pod 
created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 26 21:55:53.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3766 -o json' Apr 26 21:55:53.612: INFO: stderr: "" Apr 26 21:55:53.612: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-26T21:55:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3766\",\n \"resourceVersion\": \"11290986\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3766/pods/e2e-test-httpd-pod\",\n \"uid\": \"c3679564-937f-492c-808f-a8972196f544\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-g2jrw\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-g2jrw\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": 
\"default-token-g2jrw\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-26T21:55:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-26T21:55:51Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-26T21:55:51Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-26T21:55:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://05d9fca6d22b7531491e10d14f856e3da5f744cfa6e6a07432e57f02af1b3c2a\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-26T21:55:50Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.151\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.151\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-26T21:55:48Z\"\n }\n}\n" STEP: replace the image in the pod Apr 26 21:55:53.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3766' Apr 26 21:55:53.931: INFO: stderr: "" Apr 26 21:55:53.931: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 26 21:55:53.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod 
--namespace=kubectl-3766' Apr 26 21:55:59.485: INFO: stderr: "" Apr 26 21:55:59.485: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:55:59.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3766" for this suite. • [SLOW TEST:13.739 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":166,"skipped":2641,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:55:59.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4039, will wait for the garbage collector to delete the pods Apr 26 21:56:03.629: INFO: Deleting Job.batch foo took: 5.205355ms Apr 26 21:56:03.930: INFO: Terminating 
Job.batch foo pods took: 300.410065ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:56:49.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4039" for this suite. • [SLOW TEST:49.776 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":167,"skipped":2649,"failed":0} SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:56:49.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-5aeb4ddf-9845-4e1c-9e9e-26a372536eb3 STEP: Creating secret with name secret-projected-all-test-volume-56c3392e-cbc3-4251-ab23-e17a61e1be6b STEP: Creating a pod to test Check all projections for projected volume plugin Apr 26 21:56:49.402: INFO: Waiting up to 5m0s for pod 
"projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7" in namespace "projected-351" to be "success or failure" Apr 26 21:56:49.406: INFO: Pod "projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075718ms Apr 26 21:56:51.446: INFO: Pod "projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043336648s Apr 26 21:56:53.450: INFO: Pod "projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047514759s STEP: Saw pod success Apr 26 21:56:53.450: INFO: Pod "projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7" satisfied condition "success or failure" Apr 26 21:56:53.453: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7 container projected-all-volume-test: STEP: delete the pod Apr 26 21:56:53.535: INFO: Waiting for pod projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7 to disappear Apr 26 21:56:53.562: INFO: Pod projected-volume-c38472b5-9a21-4f80-9081-0d52742373c7 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:56:53.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-351" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2652,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:56:53.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 26 21:56:53.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee" in namespace "projected-9349" to be "success or failure"
Apr 26 21:56:53.695: INFO: Pod "downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee": Phase="Pending", Reason="", readiness=false. Elapsed: 18.585077ms
Apr 26 21:56:55.781: INFO: Pod "downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105045531s
Apr 26 21:56:57.785: INFO: Pod "downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108784758s
STEP: Saw pod success
Apr 26 21:56:57.785: INFO: Pod "downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee" satisfied condition "success or failure"
Apr 26 21:56:57.788: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee container client-container:
STEP: delete the pod
Apr 26 21:56:57.824: INFO: Waiting for pod downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee to disappear
Apr 26 21:56:57.828: INFO: Pod downwardapi-volume-31b24ddd-d483-49a9-b38b-098f85fc6eee no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:56:57.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9349" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2654,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:56:57.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 21:56:57.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 26 21:56:59.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3368 create -f -'
Apr 26 21:57:02.918: INFO: stderr: ""
Apr 26 21:57:02.918: INFO: stdout: "e2e-test-crd-publish-openapi-9852-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 26 21:57:02.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3368 delete e2e-test-crd-publish-openapi-9852-crds test-cr'
Apr 26 21:57:03.050: INFO: stderr: ""
Apr 26 21:57:03.050: INFO: stdout: "e2e-test-crd-publish-openapi-9852-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 26 21:57:03.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3368 apply -f -'
Apr 26 21:57:03.290: INFO: stderr: ""
Apr 26 21:57:03.290: INFO: stdout: "e2e-test-crd-publish-openapi-9852-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 26 21:57:03.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3368 delete e2e-test-crd-publish-openapi-9852-crds test-cr'
Apr 26 21:57:03.408: INFO: stderr: ""
Apr 26 21:57:03.408: INFO: stdout: "e2e-test-crd-publish-openapi-9852-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 26 21:57:03.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9852-crds'
Apr 26 21:57:03.677: INFO: stderr: ""
Apr 26 21:57:03.677: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9852-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:57:05.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3368" for this suite.
• [SLOW TEST:7.798 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":170,"skipped":2667,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:57:05.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 26 21:57:05.732: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2014 /api/v1/namespaces/watch-2014/configmaps/e2e-watch-test-label-changed c2cc102c-6621-4e59-9bee-5cb7047be92a 11291353 0 2020-04-26 21:57:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 26 21:57:05.732: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2014 /api/v1/namespaces/watch-2014/configmaps/e2e-watch-test-label-changed c2cc102c-6621-4e59-9bee-5cb7047be92a 11291354 0 2020-04-26 21:57:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 26 21:57:05.733: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2014 /api/v1/namespaces/watch-2014/configmaps/e2e-watch-test-label-changed c2cc102c-6621-4e59-9bee-5cb7047be92a 11291355 0 2020-04-26 21:57:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 26 21:57:15.828: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2014 /api/v1/namespaces/watch-2014/configmaps/e2e-watch-test-label-changed c2cc102c-6621-4e59-9bee-5cb7047be92a 11291391 0 2020-04-26 21:57:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 26 21:57:15.828: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2014 /api/v1/namespaces/watch-2014/configmaps/e2e-watch-test-label-changed c2cc102c-6621-4e59-9bee-5cb7047be92a 11291392 0 2020-04-26 21:57:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 26 21:57:15.828: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2014 /api/v1/namespaces/watch-2014/configmaps/e2e-watch-test-label-changed c2cc102c-6621-4e59-9bee-5cb7047be92a 11291393 0 2020-04-26 21:57:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:57:15.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2014" for this suite.
• [SLOW TEST:10.202 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":171,"skipped":2683,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:57:15.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 26 21:57:15.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb" in namespace "projected-4697" to be "success or failure"
Apr 26 21:57:15.898: INFO: Pod "downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156667ms
Apr 26 21:57:17.902: INFO: Pod "downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00775996s
Apr 26 21:57:19.906: INFO: Pod "downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012325993s
STEP: Saw pod success
Apr 26 21:57:19.906: INFO: Pod "downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb" satisfied condition "success or failure"
Apr 26 21:57:19.910: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb container client-container:
STEP: delete the pod
Apr 26 21:57:19.930: INFO: Waiting for pod downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb to disappear
Apr 26 21:57:19.947: INFO: Pod downwardapi-volume-b6110429-229f-4fa7-957a-3a07605394bb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:57:19.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4697" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:57:19.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-jrbc
STEP: Creating a pod to test atomic-volume-subpath
Apr 26 21:57:20.068: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jrbc" in namespace "subpath-4176" to be "success or failure"
Apr 26 21:57:20.072: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024566ms
Apr 26 21:57:22.076: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007857907s
Apr 26 21:57:24.080: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.012030962s
Apr 26 21:57:26.084: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 6.015484792s
Apr 26 21:57:28.088: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 8.019556372s
Apr 26 21:57:30.091: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 10.023442209s
Apr 26 21:57:32.096: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 12.027730408s
Apr 26 21:57:34.117: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 14.048718754s
Apr 26 21:57:36.120: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 16.052250921s
Apr 26 21:57:38.124: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 18.055680555s
Apr 26 21:57:40.128: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 20.0596729s
Apr 26 21:57:42.132: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Running", Reason="", readiness=true. Elapsed: 22.063918292s
Apr 26 21:57:44.136: INFO: Pod "pod-subpath-test-configmap-jrbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.067884696s
STEP: Saw pod success
Apr 26 21:57:44.136: INFO: Pod "pod-subpath-test-configmap-jrbc" satisfied condition "success or failure"
Apr 26 21:57:44.139: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-jrbc container test-container-subpath-configmap-jrbc:
STEP: delete the pod
Apr 26 21:57:44.166: INFO: Waiting for pod pod-subpath-test-configmap-jrbc to disappear
Apr 26 21:57:44.170: INFO: Pod pod-subpath-test-configmap-jrbc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jrbc
Apr 26 21:57:44.170: INFO: Deleting pod "pod-subpath-test-configmap-jrbc" in namespace "subpath-4176"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:57:44.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4176" for this suite.
• [SLOW TEST:24.229 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":173,"skipped":2713,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:57:44.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-97b87753-5c4f-406a-a965-4d0d89025996
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-97b87753-5c4f-406a-a965-4d0d89025996
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:57:50.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9229" for this suite.
• [SLOW TEST:6.139 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2730,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 21:57:50.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-73e65cb6-3fc5-4c5d-9337-97ca52f5ac4e
STEP: Creating a pod to test consume secrets
Apr 26 21:57:50.467: INFO: Waiting up to 5m0s for pod "pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e" in namespace "secrets-1778" to be "success or failure"
Apr 26 21:57:50.480: INFO: Pod "pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.811173ms
Apr 26 21:57:52.484: INFO: Pod "pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016832656s
Apr 26 21:57:54.488: INFO: Pod "pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021110322s
STEP: Saw pod success
Apr 26 21:57:54.488: INFO: Pod "pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e" satisfied condition "success or failure"
Apr 26 21:57:54.492: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e container secret-volume-test:
STEP: delete the pod
Apr 26 21:57:54.511: INFO: Waiting for pod pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e to disappear
Apr 26 21:57:54.516: INFO: Pod pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 21:57:54.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1778" for this suite.
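[Editor's note] The Secrets volume test above can be reproduced by hand with a manifest along these lines. The secret, pod, container, and namespace names are taken from the log; the image, data key, args, and mount path are assumptions (a minimal sketch, not the test's exact fixture):

```yaml
# Sketch of the secret + consuming pod exercised by the test above.
# Names come from the log; everything else is an assumed stand-in.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-73e65cb6-3fc5-4c5d-9337-97ca52f5ac4e
  namespace: secrets-1778
data:
  data-1: dmFsdWUtMQ==            # base64 "value-1"; key and value are assumptions
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-bcd1afa9-6225-4c8d-b6ea-cbb9e5a3ef9e
  namespace: secrets-1778
spec:
  restartPolicy: Never            # test waits for "success or failure" (Succeeded/Failed)
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-73e65cb6-3fc5-4c5d-9337-97ca52f5ac4e
  containers:
  - name: secret-volume-test      # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    args: ["--file_content=/etc/secret-volume/data-1"]       # prints the mounted key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```

Applied with `kubectl create -f -`, the container prints the mounted file's content and exits; the framework then reads the container log, as seen in the "Trying to get logs" entry above.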
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:57:54.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 26 21:57:54.651: INFO: Waiting up to 5m0s for pod "downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622" in namespace "downward-api-2012" to be "success or failure" Apr 26 21:57:54.660: INFO: Pod "downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622": Phase="Pending", Reason="", readiness=false. Elapsed: 9.019402ms Apr 26 21:57:56.921: INFO: Pod "downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27012958s Apr 26 21:57:58.926: INFO: Pod "downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.274957005s STEP: Saw pod success Apr 26 21:57:58.926: INFO: Pod "downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622" satisfied condition "success or failure" Apr 26 21:57:58.929: INFO: Trying to get logs from node jerma-worker pod downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622 container dapi-container: STEP: delete the pod Apr 26 21:57:58.993: INFO: Waiting for pod downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622 to disappear Apr 26 21:57:58.997: INFO: Pod downward-api-bd8ad37d-4077-4c93-aa27-1f7ad4f31622 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:57:58.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2012" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:57:59.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:57:59.055: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows 
request with known and required properties Apr 26 21:58:00.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 create -f -' Apr 26 21:58:04.162: INFO: stderr: "" Apr 26 21:58:04.162: INFO: stdout: "e2e-test-crd-publish-openapi-6474-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 26 21:58:04.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 delete e2e-test-crd-publish-openapi-6474-crds test-foo' Apr 26 21:58:04.285: INFO: stderr: "" Apr 26 21:58:04.285: INFO: stdout: "e2e-test-crd-publish-openapi-6474-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 26 21:58:04.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 apply -f -' Apr 26 21:58:04.566: INFO: stderr: "" Apr 26 21:58:04.566: INFO: stdout: "e2e-test-crd-publish-openapi-6474-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 26 21:58:04.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 delete e2e-test-crd-publish-openapi-6474-crds test-foo' Apr 26 21:58:04.686: INFO: stderr: "" Apr 26 21:58:04.686: INFO: stdout: "e2e-test-crd-publish-openapi-6474-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 26 21:58:04.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 create -f -' Apr 26 21:58:04.924: INFO: rc: 1 Apr 26 21:58:04.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 apply -f -' Apr 26 21:58:05.172: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 26 21:58:05.172: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 create -f -' Apr 26 21:58:05.401: INFO: rc: 1 Apr 26 21:58:05.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5672 apply -f -' Apr 26 21:58:05.668: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 26 21:58:05.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6474-crds' Apr 26 21:58:05.892: INFO: stderr: "" Apr 26 21:58:05.892: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6474-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 26 21:58:05.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6474-crds.metadata' Apr 26 21:58:06.135: INFO: stderr: "" Apr 26 21:58:06.135: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6474-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n pass them unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in the 1.20\n release and the field is planned to be removed in the 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 26 21:58:06.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6474-crds.spec' Apr 26 21:58:06.370: INFO: stderr: "" Apr 26 21:58:06.370: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6474-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 26 21:58:06.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6474-crds.spec.bars' Apr 26 21:58:06.626: INFO: stderr: "" Apr 26 21:58:06.626: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6474-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl 
explain works to return error when explain is called on property that doesn't exist Apr 26 21:58:06.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6474-crds.spec.bars2' Apr 26 21:58:06.884: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:58:09.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5672" for this suite. • [SLOW TEST:10.789 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":177,"skipped":2825,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:58:09.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 26 21:58:10.516: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 26 21:58:12.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535090, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535090, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535090, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535090, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 21:58:15.562: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:58:15.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 
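The conversion exercised above (a v1 and a v2 custom resource listed through either version) depends on the CRD declaring a Webhook conversion strategy. A minimal sketch of that stanza follows; the group, kind, service name, namespace, and path here are hypothetical placeholders, not the fixtures the test actually generates:

```yaml
# Sketch only: apiextensions.k8s.io/v1 CRD with two served versions and a
# conversion webhook. All names below are illustrative, not the e2e fixtures.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com        # hypothetical <plural>.<group>
spec:
  group: stable.example.com
  names:
    kind: Foo
    plural: foos
    singular: foo
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true                    # objects are persisted as v1
      schema:
        openAPIV3Schema:
          type: object
    - name: v2
      served: true
      storage: false                   # reads/writes in v2 go through conversion
      schema:
        openAPIV3Schema:
          type: object
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-example   # hypothetical; the test creates its own
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert
        # caBundle: <base64 PEM>  -- required in practice to trust the webhook's cert
```

When a client lists or gets objects in a non-storage version, the apiserver sends a ConversionReview to this service and returns the converted objects, which is why a single list can surface both v1- and v2-created resources.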
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:58:16.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7412" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.148 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":178,"skipped":2833,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:58:16.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:58:17.038: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 26 21:58:17.046: INFO: Number of nodes with available pods: 0 Apr 26 21:58:17.046: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 26 21:58:17.090: INFO: Number of nodes with available pods: 0 Apr 26 21:58:17.090: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:18.094: INFO: Number of nodes with available pods: 0 Apr 26 21:58:18.094: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:19.130: INFO: Number of nodes with available pods: 0 Apr 26 21:58:19.130: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:20.094: INFO: Number of nodes with available pods: 0 Apr 26 21:58:20.094: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:21.094: INFO: Number of nodes with available pods: 1 Apr 26 21:58:21.095: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 26 21:58:21.149: INFO: Number of nodes with available pods: 1 Apr 26 21:58:21.149: INFO: Number of running nodes: 0, number of available pods: 1 Apr 26 21:58:22.153: INFO: Number of nodes with available pods: 0 Apr 26 21:58:22.153: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 26 21:58:22.160: INFO: Number of nodes with available pods: 0 Apr 26 21:58:22.160: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:23.164: INFO: Number of nodes with available pods: 0 Apr 26 21:58:23.164: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:24.164: INFO: Number of 
nodes with available pods: 0 Apr 26 21:58:24.164: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:25.165: INFO: Number of nodes with available pods: 0 Apr 26 21:58:25.165: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:26.164: INFO: Number of nodes with available pods: 0 Apr 26 21:58:26.164: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:27.163: INFO: Number of nodes with available pods: 0 Apr 26 21:58:27.163: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:28.164: INFO: Number of nodes with available pods: 0 Apr 26 21:58:28.164: INFO: Node jerma-worker is running more than one daemon pod Apr 26 21:58:29.164: INFO: Number of nodes with available pods: 1 Apr 26 21:58:29.164: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5450, will wait for the garbage collector to delete the pods Apr 26 21:58:29.227: INFO: Deleting DaemonSet.extensions daemon-set took: 5.954007ms Apr 26 21:58:31.327: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.100282638s Apr 26 21:58:39.331: INFO: Number of nodes with available pods: 0 Apr 26 21:58:39.331: INFO: Number of running nodes: 0, number of available pods: 0 Apr 26 21:58:39.334: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5450/daemonsets","resourceVersion":"11291907"},"items":null} Apr 26 21:58:39.336: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5450/pods","resourceVersion":"11291907"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:58:39.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5450" for this suite. • [SLOW TEST:22.458 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":179,"skipped":2883,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:58:39.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-19f567d0-4e08-48c7-b6b3-e7707a7efef7 STEP: Creating a pod to test consume secrets Apr 26 21:58:39.492: INFO: Waiting up to 5m0s for pod "pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625" in namespace "secrets-5689" to be "success or failure" Apr 26 21:58:39.506: INFO: Pod "pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.422068ms Apr 26 21:58:41.511: INFO: Pod "pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018688464s Apr 26 21:58:43.515: INFO: Pod "pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022939668s STEP: Saw pod success Apr 26 21:58:43.515: INFO: Pod "pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625" satisfied condition "success or failure" Apr 26 21:58:43.518: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625 container secret-volume-test: STEP: delete the pod Apr 26 21:58:43.539: INFO: Waiting for pod pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625 to disappear Apr 26 21:58:43.555: INFO: Pod pod-secrets-831d12be-a7c0-4d20-aeea-a096d8f5d625 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:58:43.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5689" for this suite. 
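The pod shape this secrets test builds (a secret volume with a key-to-path mapping and an explicit per-item mode) can be sketched as follows; the secret name, key, and paths are hypothetical stand-ins for the generated e2e names:

```yaml
# Sketch only: mount one secret key at a remapped path with mode 0400.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # hypothetical
spec:
  restartPolicy: Never
  volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map-example   # hypothetical secret
        items:
          - key: data-1                # secret key to project
            path: new-path-data-1      # mapping: file name inside the mount
            mode: 0400                 # item mode checked by the test
  containers:
    - name: secret-volume-test
      image: busybox                   # the e2e suite uses its own test image
      command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
```

The pod runs to completion and the test treats the Succeeded phase (after inspecting the container's log output) as the "success or failure" condition seen in the log above.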
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2893,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:58:43.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5482 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5482;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5482 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5482;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5482.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5482.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5482.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-5482.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5482.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5482.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5482.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 247.2.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.2.247_udp@PTR;check="$$(dig +tcp +noall +answer +search 247.2.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.2.247_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5482 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5482;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5482 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5482;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5482.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5482.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5482.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5482.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5482.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5482.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5482.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5482.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5482.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 247.2.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.2.247_udp@PTR;check="$$(dig +tcp +noall +answer +search 247.2.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.2.247_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 21:58:49.699: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.703: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.712: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods 
dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.714: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.717: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.720: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.742: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.745: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.748: INFO: Unable to read jessie_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.751: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.755: INFO: Unable to read jessie_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the 
requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.758: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.761: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.764: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:49.781: INFO: Lookups using dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5482 wheezy_tcp@dns-test-service.dns-5482 wheezy_udp@dns-test-service.dns-5482.svc wheezy_tcp@dns-test-service.dns-5482.svc wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5482 jessie_tcp@dns-test-service.dns-5482 jessie_udp@dns-test-service.dns-5482.svc jessie_tcp@dns-test-service.dns-5482.svc jessie_udp@_http._tcp.dns-test-service.dns-5482.svc jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc] Apr 26 21:58:54.786: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.790: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not 
find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.793: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.800: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.804: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.808: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.810: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.835: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.838: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: 
the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.840: INFO: Unable to read jessie_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.844: INFO: Unable to read jessie_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.846: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.848: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.850: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:54.864: INFO: Lookups using dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5482 wheezy_tcp@dns-test-service.dns-5482 wheezy_udp@dns-test-service.dns-5482.svc wheezy_tcp@dns-test-service.dns-5482.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5482 jessie_tcp@dns-test-service.dns-5482 jessie_udp@dns-test-service.dns-5482.svc jessie_tcp@dns-test-service.dns-5482.svc jessie_udp@_http._tcp.dns-test-service.dns-5482.svc jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc] Apr 26 21:58:59.786: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.789: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.792: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.795: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.798: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.801: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.804: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.807: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.842: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.845: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.848: INFO: Unable to read jessie_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.853: INFO: Unable to read jessie_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.856: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.859: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.861: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:58:59.878: INFO: Lookups using dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5482 wheezy_tcp@dns-test-service.dns-5482 wheezy_udp@dns-test-service.dns-5482.svc wheezy_tcp@dns-test-service.dns-5482.svc wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5482 jessie_tcp@dns-test-service.dns-5482 jessie_udp@dns-test-service.dns-5482.svc jessie_tcp@dns-test-service.dns-5482.svc jessie_udp@_http._tcp.dns-test-service.dns-5482.svc jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc] Apr 26 21:59:04.790: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.794: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.797: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 
21:59:04.800: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.804: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.808: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.811: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.813: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.831: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.833: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.835: INFO: Unable to read jessie_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods 
dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.838: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.840: INFO: Unable to read jessie_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.843: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.845: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.848: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:04.866: INFO: Lookups using dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5482 wheezy_tcp@dns-test-service.dns-5482 wheezy_udp@dns-test-service.dns-5482.svc wheezy_tcp@dns-test-service.dns-5482.svc wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5482 jessie_tcp@dns-test-service.dns-5482 jessie_udp@dns-test-service.dns-5482.svc jessie_tcp@dns-test-service.dns-5482.svc 
jessie_udp@_http._tcp.dns-test-service.dns-5482.svc jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc] Apr 26 21:59:09.786: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.790: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.794: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.798: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.801: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.805: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.808: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.811: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod 
dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.834: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.837: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.841: INFO: Unable to read jessie_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.845: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.849: INFO: Unable to read jessie_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.855: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:09.866: INFO: Lookups using dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5482 wheezy_tcp@dns-test-service.dns-5482 wheezy_udp@dns-test-service.dns-5482.svc wheezy_tcp@dns-test-service.dns-5482.svc wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5482 jessie_tcp@dns-test-service.dns-5482 jessie_udp@dns-test-service.dns-5482.svc jessie_tcp@dns-test-service.dns-5482.svc jessie_udp@_http._tcp.dns-test-service.dns-5482.svc jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc] Apr 26 21:59:14.785: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.787: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.790: INFO: Unable to read wheezy_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.792: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.795: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.798: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.801: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.804: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.822: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.825: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.827: INFO: Unable to read jessie_udp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.830: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482 from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.833: 
INFO: Unable to read jessie_udp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.835: INFO: Unable to read jessie_tcp@dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.838: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.840: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc from pod dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825: the server could not find the requested resource (get pods dns-test-99407afa-503d-4577-a08b-793a062b5825) Apr 26 21:59:14.875: INFO: Lookups using dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5482 wheezy_tcp@dns-test-service.dns-5482 wheezy_udp@dns-test-service.dns-5482.svc wheezy_tcp@dns-test-service.dns-5482.svc wheezy_udp@_http._tcp.dns-test-service.dns-5482.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5482.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5482 jessie_tcp@dns-test-service.dns-5482 jessie_udp@dns-test-service.dns-5482.svc jessie_tcp@dns-test-service.dns-5482.svc jessie_udp@_http._tcp.dns-test-service.dns-5482.svc jessie_tcp@_http._tcp.dns-test-service.dns-5482.svc] Apr 26 21:59:19.869: INFO: DNS probes using dns-5482/dns-test-99407afa-503d-4577-a08b-793a062b5825 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:59:20.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5482" for this suite. • [SLOW TEST:36.825 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":181,"skipped":2901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:59:20.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 21:59:20.559: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75" in namespace "projected-9230" to be "success or failure" Apr 26 21:59:20.569: INFO: Pod 
"downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75": Phase="Pending", Reason="", readiness=false. Elapsed: 10.382728ms Apr 26 21:59:22.598: INFO: Pod "downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038859131s Apr 26 21:59:24.610: INFO: Pod "downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05083947s STEP: Saw pod success Apr 26 21:59:24.610: INFO: Pod "downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75" satisfied condition "success or failure" Apr 26 21:59:24.612: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75 container client-container: STEP: delete the pod Apr 26 21:59:24.649: INFO: Waiting for pod downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75 to disappear Apr 26 21:59:24.681: INFO: Pod downwardapi-volume-5ad30c6a-b971-45f5-b0d1-cfd86ff1db75 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:59:24.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9230" for this suite. 
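The test above creates a pod whose projected downwardAPI volume exposes the container's memory limit as a file, then asserts the pod reaches "Succeeded". A minimal sketch of such a manifest (the pod name, mount path, and file name here are illustrative, not the ones generated by the suite):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"            # the value the volume file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The suite then reads the container's logs (as seen in the "Trying to get logs" step) to verify the file contents match the declared limit.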
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2931,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:59:24.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2020 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2020 I0426 21:59:25.191921 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2020, replica count: 2 I0426 21:59:28.242379 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 21:59:31.242597 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 21:59:31.242: INFO: Creating new exec pod Apr 26 21:59:36.256: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-2020 execpodtrtq9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 26 21:59:36.493: INFO: stderr: "I0426 21:59:36.393922 2587 log.go:172] (0xc0009fa000) (0xc00096c000) Create stream\nI0426 21:59:36.393982 2587 log.go:172] (0xc0009fa000) (0xc00096c000) Stream added, broadcasting: 1\nI0426 21:59:36.396524 2587 log.go:172] (0xc0009fa000) Reply frame received for 1\nI0426 21:59:36.396587 2587 log.go:172] (0xc0009fa000) (0xc0008b8000) Create stream\nI0426 21:59:36.396606 2587 log.go:172] (0xc0009fa000) (0xc0008b8000) Stream added, broadcasting: 3\nI0426 21:59:36.397941 2587 log.go:172] (0xc0009fa000) Reply frame received for 3\nI0426 21:59:36.397991 2587 log.go:172] (0xc0009fa000) (0xc0008b80a0) Create stream\nI0426 21:59:36.398009 2587 log.go:172] (0xc0009fa000) (0xc0008b80a0) Stream added, broadcasting: 5\nI0426 21:59:36.399094 2587 log.go:172] (0xc0009fa000) Reply frame received for 5\nI0426 21:59:36.485629 2587 log.go:172] (0xc0009fa000) Data frame received for 5\nI0426 21:59:36.485669 2587 log.go:172] (0xc0009fa000) Data frame received for 3\nI0426 21:59:36.485689 2587 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0426 21:59:36.485709 2587 log.go:172] (0xc0008b80a0) (5) Data frame handling\nI0426 21:59:36.485730 2587 log.go:172] (0xc0008b80a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0426 21:59:36.485937 2587 log.go:172] (0xc0009fa000) Data frame received for 5\nI0426 21:59:36.486053 2587 log.go:172] (0xc0008b80a0) (5) Data frame handling\nI0426 21:59:36.487747 2587 log.go:172] (0xc0009fa000) Data frame received for 1\nI0426 21:59:36.487759 2587 log.go:172] (0xc00096c000) (1) Data frame handling\nI0426 21:59:36.487766 2587 log.go:172] (0xc00096c000) (1) Data frame sent\nI0426 21:59:36.487856 2587 log.go:172] (0xc0009fa000) (0xc00096c000) Stream removed, broadcasting: 1\nI0426 21:59:36.487879 2587 
log.go:172] (0xc0009fa000) Go away received\nI0426 21:59:36.488166 2587 log.go:172] (0xc0009fa000) (0xc00096c000) Stream removed, broadcasting: 1\nI0426 21:59:36.488188 2587 log.go:172] (0xc0009fa000) (0xc0008b8000) Stream removed, broadcasting: 3\nI0426 21:59:36.488198 2587 log.go:172] (0xc0009fa000) (0xc0008b80a0) Stream removed, broadcasting: 5\n" Apr 26 21:59:36.494: INFO: stdout: "" Apr 26 21:59:36.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2020 execpodtrtq9 -- /bin/sh -x -c nc -zv -t -w 2 10.100.244.83 80' Apr 26 21:59:36.705: INFO: stderr: "I0426 21:59:36.620123 2608 log.go:172] (0xc0009d2000) (0xc0006bda40) Create stream\nI0426 21:59:36.620205 2608 log.go:172] (0xc0009d2000) (0xc0006bda40) Stream added, broadcasting: 1\nI0426 21:59:36.622618 2608 log.go:172] (0xc0009d2000) Reply frame received for 1\nI0426 21:59:36.622654 2608 log.go:172] (0xc0009d2000) (0xc0006bdc20) Create stream\nI0426 21:59:36.622671 2608 log.go:172] (0xc0009d2000) (0xc0006bdc20) Stream added, broadcasting: 3\nI0426 21:59:36.623638 2608 log.go:172] (0xc0009d2000) Reply frame received for 3\nI0426 21:59:36.623674 2608 log.go:172] (0xc0009d2000) (0xc00099c000) Create stream\nI0426 21:59:36.623686 2608 log.go:172] (0xc0009d2000) (0xc00099c000) Stream added, broadcasting: 5\nI0426 21:59:36.624638 2608 log.go:172] (0xc0009d2000) Reply frame received for 5\nI0426 21:59:36.698311 2608 log.go:172] (0xc0009d2000) Data frame received for 3\nI0426 21:59:36.698349 2608 log.go:172] (0xc0006bdc20) (3) Data frame handling\nI0426 21:59:36.698394 2608 log.go:172] (0xc0009d2000) Data frame received for 5\nI0426 21:59:36.698429 2608 log.go:172] (0xc00099c000) (5) Data frame handling\nI0426 21:59:36.698460 2608 log.go:172] (0xc00099c000) (5) Data frame sent\n+ nc -zv -t -w 2 10.100.244.83 80\nConnection to 10.100.244.83 80 port [tcp/http] succeeded!\nI0426 21:59:36.698607 2608 log.go:172] (0xc0009d2000) Data frame received for 5\nI0426 
21:59:36.698634 2608 log.go:172] (0xc00099c000) (5) Data frame handling\nI0426 21:59:36.700263 2608 log.go:172] (0xc0009d2000) Data frame received for 1\nI0426 21:59:36.700284 2608 log.go:172] (0xc0006bda40) (1) Data frame handling\nI0426 21:59:36.700307 2608 log.go:172] (0xc0006bda40) (1) Data frame sent\nI0426 21:59:36.700328 2608 log.go:172] (0xc0009d2000) (0xc0006bda40) Stream removed, broadcasting: 1\nI0426 21:59:36.700411 2608 log.go:172] (0xc0009d2000) Go away received\nI0426 21:59:36.700854 2608 log.go:172] (0xc0009d2000) (0xc0006bda40) Stream removed, broadcasting: 1\nI0426 21:59:36.700876 2608 log.go:172] (0xc0009d2000) (0xc0006bdc20) Stream removed, broadcasting: 3\nI0426 21:59:36.700895 2608 log.go:172] (0xc0009d2000) (0xc00099c000) Stream removed, broadcasting: 5\n" Apr 26 21:59:36.706: INFO: stdout: "" Apr 26 21:59:36.706: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:59:36.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2020" for this suite. 
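The type change exercised above can be sketched as a before/after pair of Service manifests (the external target and selector labels are illustrative; the suite backs the ClusterIP service with the `externalname-service` replication controller created in the log):

```yaml
# Before: a Service of type ExternalName (a DNS CNAME, no cluster IP)
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com        # illustrative target
---
# After: the test mutates the same Service to type ClusterIP,
# adding a selector and port so it fronts the RC's pods
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service     # illustrative; must match the RC pod labels
  ports:
  - port: 80
    targetPort: 80
```

Reachability is then verified from the exec pod with `nc -zv -t -w 2 <service-name-or-cluster-ip> 80`, as shown in the kubectl exec output above.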
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.069 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":183,"skipped":2956,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:59:36.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:59:36.832: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 26 21:59:41.848: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 26 21:59:41.848: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 26 
21:59:45.921: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3278 /apis/apps/v1/namespaces/deployment-3278/deployments/test-cleanup-deployment e8d802e6-e2a1-4855-a18e-2e46a5ff6466 11292345 1 2020-04-26 21:59:41 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004664808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-26 21:59:41 +0000 UTC,LastTransitionTime:2020-04-26 21:59:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully 
progressed.,LastUpdateTime:2020-04-26 21:59:45 +0000 UTC,LastTransitionTime:2020-04-26 21:59:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 26 21:59:45.924: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3278 /apis/apps/v1/namespaces/deployment-3278/replicasets/test-cleanup-deployment-55ffc6b7b6 012e9900-59cc-4aa6-be72-219975b6ba1b 11292334 1 2020-04-26 21:59:41 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment e8d802e6-e2a1-4855-a18e-2e46a5ff6466 0xc004e7abe7 0xc004e7abe8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004e7ac68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 26 21:59:45.928: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-rcqcp" is 
available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-rcqcp test-cleanup-deployment-55ffc6b7b6- deployment-3278 /api/v1/namespaces/deployment-3278/pods/test-cleanup-deployment-55ffc6b7b6-rcqcp 3b206f8b-d086-42b8-8b2c-f888d155edc2 11292333 0 2020-04-26 21:59:41 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 012e9900-59cc-4aa6-be72-219975b6ba1b 0xc004f37427 0xc004f37428}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-glx7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-glx7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-glx7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,Term
inationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.253,StartTime:2020-04-26 21:59:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 21:59:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://80308e313c28812471a36f82a9ea56b90abed5a0281f3ce1480f624112404759,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:59:45.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3278" for this suite. 
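The Deployment dump for "test-cleanup-deployment" above shows `RevisionHistoryLimit:*0`, which is what the "should delete old replica sets" test exercises: once a rollout succeeds, the deployment controller prunes any superseded ReplicaSets beyond that limit, keeping the newest revisions first. A minimal sketch of that retention rule, assuming a hypothetical helper (not the actual deployment controller code):

```python
def replicasets_to_prune(old_replicasets, revision_history_limit):
    """Return the old ReplicaSets the controller would delete.

    old_replicasets: list of (name, revision) tuples for ReplicaSets that
    no longer match the Deployment's current pod template.
    revision_history_limit: number of old revisions to retain; 0 keeps
    none, as with test-cleanup-deployment in the dump above.
    """
    # Retain the newest `revision_history_limit` revisions; everything
    # older is a pruning candidate.
    by_revision = sorted(old_replicasets, key=lambda rs: rs[1], reverse=True)
    return by_revision[revision_history_limit:]
```

With the default limit of 10, nothing would be pruned until more than ten superseded revisions accumulate; with the limit of 0 used by this test, every old ReplicaSet is deleted as soon as the new one is available.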
• [SLOW TEST:9.171 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":184,"skipped":2978,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:59:45.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:59:46.110: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5376dc32-1e93-4994-82b6-10ad6b526967" in namespace "security-context-test-6418" to be "success or failure" Apr 26 21:59:46.126: INFO: Pod "busybox-user-65534-5376dc32-1e93-4994-82b6-10ad6b526967": Phase="Pending", Reason="", readiness=false. Elapsed: 15.929772ms Apr 26 21:59:48.130: INFO: Pod "busybox-user-65534-5376dc32-1e93-4994-82b6-10ad6b526967": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020163886s Apr 26 21:59:50.135: INFO: Pod "busybox-user-65534-5376dc32-1e93-4994-82b6-10ad6b526967": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024446353s Apr 26 21:59:50.135: INFO: Pod "busybox-user-65534-5376dc32-1e93-4994-82b6-10ad6b526967" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:59:50.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6418" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2997,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:59:50.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 21:59:56.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9777" for this suite. STEP: Destroying namespace "nsdeletetest-6021" for this suite. Apr 26 21:59:56.454: INFO: Namespace nsdeletetest-6021 was already deleted STEP: Destroying namespace "nsdeletetest-6344" for this suite. • [SLOW TEST:6.314 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":186,"skipped":3005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 21:59:56.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 21:59:56.539: INFO: Creating deployment "webserver-deployment" Apr 26 21:59:56.543: INFO: Waiting for observed generation 1 Apr 26 21:59:58.766: INFO: Waiting for all required pods to come up Apr 26 21:59:58.770: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 26 22:00:08.945: INFO: Waiting for deployment "webserver-deployment" to complete Apr 26 22:00:08.952: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 26 22:00:08.957: INFO: Updating deployment webserver-deployment Apr 26 22:00:08.957: INFO: Waiting for observed generation 2 Apr 26 22:00:10.969: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 26 22:00:10.973: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 26 22:00:10.975: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 26 22:00:10.981: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 26 22:00:10.981: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 26 22:00:10.983: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 26 22:00:10.988: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 26 22:00:10.988: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 26 22:00:10.992: INFO: Updating deployment webserver-deployment Apr 26 22:00:10.992: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 26 22:00:11.158: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 26 22:00:11.181: INFO: Verifying that second rollout's replicaset has .spec.replicas = 
13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 26 22:00:11.325: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9604 /apis/apps/v1/namespaces/deployment-9604/deployments/webserver-deployment d7a65e94-6db1-4b41-9352-225541e68169 11292688 3 2020-04-26 21:59:56 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030904e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-26 22:00:09 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 
UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-26 22:00:11 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 26 22:00:11.473: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9604 /apis/apps/v1/namespaces/deployment-9604/replicasets/webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 11292734 3 2020-04-26 22:00:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment d7a65e94-6db1-4b41-9352-225541e68169 0xc0041bcfe7 0xc0041bcfe8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041bd058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 
22:00:11.473: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 26 22:00:11.474: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9604 /apis/apps/v1/namespaces/deployment-9604/replicasets/webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 11292735 3 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment d7a65e94-6db1-4b41-9352-225541e68169 0xc0041bcf27 0xc0041bcf28}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041bcf88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 26 22:00:11.546: INFO: Pod "webserver-deployment-595b5b9587-2c4v8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2c4v8 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-2c4v8 
a7c94d7a-b42b-4f3e-9154-351bee1ff7d2 11292718 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bd507 0xc0041bd508}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:f
alse,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.546: INFO: Pod "webserver-deployment-595b5b9587-4tf76" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4tf76 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-4tf76 a9df1949-cf89-4160-a485-4f8949adc254 11292724 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bd627 0xc0041bd628}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.547: INFO: Pod "webserver-deployment-595b5b9587-5mrxx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5mrxx webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-5mrxx 50d6e676-6463-4f0a-85e9-f8e4fbad556a 11292580 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bd747 0xc0041bd748}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.164,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b4f67f1a8fbf825eeff5343a3d6f0101ebb8cb1d8bc020bbce53fb41932c2e07,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.547: INFO: Pod "webserver-deployment-595b5b9587-5zqmk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5zqmk webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-5zqmk 859de054-8c82-4fda-afaf-311f5a00144f 11292722 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bd8c7 0xc0041bd8c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.547: INFO: Pod "webserver-deployment-595b5b9587-7b44h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7b44h webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-7b44h 6d2a57da-41ab-4b3a-bcf5-06884802bf82 11292691 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bd9e7 0xc0041bd9e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.547: INFO: Pod "webserver-deployment-595b5b9587-9fkl4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9fkl4 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-9fkl4 d3fd0300-facc-4b53-9ab3-290837284a6d 11292584 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bdb07 0xc0041bdb08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.165,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f583d3194cc04af0df6cd93584d4fa1489ef658fc66091cb2e075ec06b0262ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.547: INFO: Pod "webserver-deployment-595b5b9587-b9st8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b9st8 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-b9st8 afebb516-34e9-4437-b69e-a136b540b28e 11292713 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bdc87 0xc0041bdc88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.547: INFO: Pod "webserver-deployment-595b5b9587-cssm6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cssm6 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-cssm6 8ff5d970-ce69-499a-a78b-7b0acb37301c 11292534 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bdda7 0xc0041bdda8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.254,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1e5248dd4555ecc3459253394b664f1c34b298b189e307074323b4521f56e7fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.254,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.548: INFO: Pod "webserver-deployment-595b5b9587-d89lx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d89lx webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-d89lx cbf56c94-acab-49a8-a6a1-304a0b323dd8 11292572 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0041bdf27 0xc0041bdf28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.4,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d18890a9515876525aba60ce33de6c0b550fc9b1da0dad13c23c1563c90407c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.548: INFO: Pod "webserver-deployment-595b5b9587-dk5mr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dk5mr webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-dk5mr 537157ce-a1f6-4cd9-9174-8c21ebe0f7c7 11292554 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0039160b7 0xc0039160b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.3,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e8de6b4c707bdb2ef8996d33e0d367f2122ff65a906688b6b199df6416cad93c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.548: INFO: Pod "webserver-deployment-595b5b9587-fbqv6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fbqv6 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-fbqv6 f7777f63-e3ba-47c1-a002-afa8bf23efde 11292562 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916247 0xc003916248}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.163,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://54e78bdded96f96d4d0723039be1d452e33fb0e11244ab35b4c67e09e8cb757a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 26 22:00:11.548: INFO: Pod "webserver-deployment-595b5b9587-gksw8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gksw8 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-gksw8 498f0f12-be3c-40cd-99a6-3b9049ce3110 11292705 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0039163c7 0xc0039163c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.548: INFO: Pod "webserver-deployment-595b5b9587-jc8dr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jc8dr webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-jc8dr d11c94fa-11cc-4bf8-9c57-ff7d56ea983b 11292711 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc0039164e7 0xc0039164e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.548: INFO: Pod "webserver-deployment-595b5b9587-q8mjs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q8mjs webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-q8mjs a40f7926-e0a2-43d5-b115-65b55f805adf 11292690 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916607 0xc003916608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.548: INFO: Pod "webserver-deployment-595b5b9587-sxv76" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sxv76 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-sxv76 e5b9cc69-c47b-4dec-9cd9-87f7d20140b9 11292727 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916727 0xc003916728}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.549: INFO: Pod "webserver-deployment-595b5b9587-tbldl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tbldl webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-tbldl 80d2a37e-5f92-4e44-83f7-09c15afda73c 11292729 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916847 0xc003916848}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.549: INFO: Pod "webserver-deployment-595b5b9587-w8474" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w8474 webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-w8474 f0ca2561-f9b5-4eae-b552-d6639d350c52 11292733 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916967 0xc003916968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-26 22:00:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.549: INFO: Pod "webserver-deployment-595b5b9587-ww2bj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ww2bj webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-ww2bj 0d4d2987-cc18-424d-9965-9b9511180259 11292582 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916ac7 0xc003916ac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.5,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1d92c611ccede0880d6a1e25d6f21e289a1a531a7739707e1fb897b664e2bea0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.549: INFO: Pod "webserver-deployment-595b5b9587-xclvh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xclvh webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-xclvh 02be7514-8732-4e65-942b-1e452843d36b 11292549 0 2020-04-26 21:59:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916c47 0xc003916c48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 21:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.2,StartTime:2020-04-26 21:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:00:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6781fcacad7958483c14cd3f6ec3e22038cb5d51124fd6299c1563b04cd1c009,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.549: INFO: Pod "webserver-deployment-595b5b9587-zqk7g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zqk7g webserver-deployment-595b5b9587- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-595b5b9587-zqk7g 2e39d51e-6eec-4bd3-b3cb-be6c0b3371d9 11292728 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2c8318f5-e103-4bbc-b7b3-51318bb74307 0xc003916dc7 0xc003916dc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.549: INFO: Pod "webserver-deployment-c7997dcc8-7dsf8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7dsf8 webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-7dsf8 8f484e91-67d2-4cf1-9188-e6b701fe78d1 11292716 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003916ee7 0xc003916ee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-7jptw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7jptw webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-7jptw ad975a80-2e21-4a62-83a6-1876b03565ea 11292662 0 2020-04-26 22:00:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917017 0xc003917018}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-26 22:00:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-8xpw4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8xpw4 webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-8xpw4 a099ba5f-6347-4ddb-9e25-83cb2d509ae4 11292645 0 2020-04-26 22:00:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917197 0xc003917198}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-26 22:00:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-bgq85" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bgq85 webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-bgq85 34d85f4a-cacd-4732-b784-efa35582fd9b 11292715 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917317 0xc003917318}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-bt9r4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bt9r4 webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-bt9r4 9b844ccf-3515-4ce5-b37b-cc391a910ec5 11292665 0 2020-04-26 22:00:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917447 0xc003917448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-26 22:00:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-dzz6f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dzz6f webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-dzz6f 803299a6-72f8-4c3e-80c5-2dba72c6a701 11292640 0 2020-04-26 22:00:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc0039175c7 0xc0039175c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-26 22:00:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-k298q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k298q webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-k298q 2e8c9ae3-5f0a-4d93-ae0e-3fa09766a413 11292700 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917747 0xc003917748}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-ljgd9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ljgd9 webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-ljgd9 9ff0808d-717e-487a-8db0-22621a517f8a 11292717 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917877 0xc003917878}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.550: INFO: Pod "webserver-deployment-c7997dcc8-pc88w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pc88w webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-pc88w de478f82-fe78-4dc7-b314-57746ca3f24d 11292726 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc0039179a7 0xc0039179a8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.551: INFO: Pod "webserver-deployment-c7997dcc8-r8z6f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r8z6f webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-r8z6f 8d4126b0-b2cd-4ba2-8d89-3dca24af5684 11292736 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917ad7 0xc003917ad8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-26 22:00:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.551: INFO: Pod "webserver-deployment-c7997dcc8-rm9wp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rm9wp webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-rm9wp 5184e7dc-f68d-4d11-a725-cffa511de3ae 11292692 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917c57 0xc003917c58}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.551: INFO: Pod "webserver-deployment-c7997dcc8-vhxpz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vhxpz webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-vhxpz 6700b68e-cb5d-4766-b7b2-c8e3c9c4c247 11292661 0 2020-04-26 22:00:08 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917d87 0xc003917d88}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-26 22:00:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 26 22:00:11.551: INFO: Pod "webserver-deployment-c7997dcc8-wwclc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wwclc webserver-deployment-c7997dcc8- deployment-9604 /api/v1/namespaces/deployment-9604/pods/webserver-deployment-c7997dcc8-wwclc 6aa92d45-2868-45fd-8217-872ccd447db6 11292714 0 2020-04-26 22:00:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c15b17a6-c280-43bc-95a7-8aae2ac0f760 0xc003917f07 0xc003917f08}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szxqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szxqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szxqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:00:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:00:11.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9604" for this suite.
• [SLOW TEST:15.212 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":187,"skipped":3046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:00:11.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 26 22:00:11.921: INFO: Waiting up to 5m0s for pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a" in namespace "emptydir-7365" to be "success or failure" Apr 26 22:00:11.924: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.236853ms Apr 26 22:00:13.991: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070426644s Apr 26 22:00:16.187: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.266077874s Apr 26 22:00:18.337: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415951754s Apr 26 22:00:20.462: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541422626s Apr 26 22:00:22.762: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.841103685s Apr 26 22:00:25.090: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.169039939s Apr 26 22:00:27.289: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Running", Reason="", readiness=true. Elapsed: 15.368262571s Apr 26 22:00:29.362: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.441120771s STEP: Saw pod success Apr 26 22:00:29.362: INFO: Pod "pod-3153ce10-1216-4c85-bc37-78aeb807fd0a" satisfied condition "success or failure" Apr 26 22:00:29.695: INFO: Trying to get logs from node jerma-worker2 pod pod-3153ce10-1216-4c85-bc37-78aeb807fd0a container test-container: STEP: delete the pod Apr 26 22:00:30.276: INFO: Waiting for pod pod-3153ce10-1216-4c85-bc37-78aeb807fd0a to disappear Apr 26 22:00:30.289: INFO: Pod pod-3153ce10-1216-4c85-bc37-78aeb807fd0a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:00:30.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7365" for this suite. 
• [SLOW TEST:18.826 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3090,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:00:30.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 22:00:32.592: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 22:00:35.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723535232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535232, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535232, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:00:37.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535232, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535232, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 22:00:40.062: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a 
mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:00:40.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6743" for this suite. STEP: Destroying namespace "webhook-6743-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.895 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":189,"skipped":3109,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:00:40.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-0a4fb006-b97a-4047-afde-6ebdb9b6ff21 STEP: Creating a pod to test consume configMaps Apr 26 22:00:41.358: INFO: Waiting up to 5m0s for pod "pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93" in namespace "configmap-5697" to be "success or failure" Apr 26 22:00:41.430: INFO: Pod "pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93": Phase="Pending", Reason="", readiness=false. Elapsed: 72.406247ms Apr 26 22:00:43.435: INFO: Pod "pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076886866s Apr 26 22:00:45.467: INFO: Pod "pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109142495s STEP: Saw pod success Apr 26 22:00:45.467: INFO: Pod "pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93" satisfied condition "success or failure" Apr 26 22:00:45.506: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93 container configmap-volume-test: STEP: delete the pod Apr 26 22:00:45.538: INFO: Waiting for pod pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93 to disappear Apr 26 22:00:45.548: INFO: Pod pod-configmaps-da350c39-9e71-4a6d-bb0c-d47dd5b73d93 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:00:45.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5697" for this suite. 
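The ConfigMap volume test above polls the pod's phase at roughly two-second intervals (72ms, 2.08s, 4.11s elapsed) until it reaches Succeeded or the 5m0s timeout expires. A minimal Python sketch of that wait loop; `get_phase`, the interval, and the simulated phase sequence are illustrative stand-ins, not the e2e framework's actual Go implementation:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase is reached or timeout expires.

    Mirrors the logged pattern: Pending -> Pending -> Succeeded, reporting
    the elapsed time on every attempt.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase={phase!r} Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated sequence matching the log entries above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval=0.01)
```

The real framework treats Succeeded as "success or failure" satisfied for this test, since the container is expected to exit after printing the mapped ConfigMap keys.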
• [SLOW TEST:5.199 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3123,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:00:45.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 26 22:00:49.745: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 26 22:00:59.854: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] 
[k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:00:59.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1464" for this suite. • [SLOW TEST:14.299 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":191,"skipped":3138,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:00:59.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 26 22:00:59.933: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6699' Apr 26 22:01:00.048: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 26 22:01:00.048: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Apr 26 22:01:00.108: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-c7r6c] Apr 26 22:01:00.108: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-c7r6c" in namespace "kubectl-6699" to be "running and ready" Apr 26 22:01:00.117: INFO: Pod "e2e-test-httpd-rc-c7r6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.665306ms Apr 26 22:01:02.192: INFO: Pod "e2e-test-httpd-rc-c7r6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084122261s Apr 26 22:01:04.196: INFO: Pod "e2e-test-httpd-rc-c7r6c": Phase="Running", Reason="", readiness=true. Elapsed: 4.088235779s Apr 26 22:01:04.196: INFO: Pod "e2e-test-httpd-rc-c7r6c" satisfied condition "running and ready" Apr 26 22:01:04.196: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-c7r6c] Apr 26 22:01:04.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6699' Apr 26 22:01:04.338: INFO: stderr: "" Apr 26 22:01:04.338: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.183. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.183. 
Set the 'ServerName' directive globally to suppress this message\n[Sun Apr 26 22:01:02.922943 2020] [mpm_event:notice] [pid 1:tid 140629459962728] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Apr 26 22:01:02.922991 2020] [core:notice] [pid 1:tid 140629459962728] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Apr 26 22:01:04.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6699' Apr 26 22:01:04.445: INFO: stderr: "" Apr 26 22:01:04.445: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:01:04.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6699" for this suite. 
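Each passing spec in this run emits a JSON progress record of the form `{"msg": ..., "total": 278, "completed": ..., "skipped": ..., "failed": 0}`. A short sketch of summarizing those records with Python's `json` module; the two sample lines are copied from this log, and the `progress` helper is illustrative:

```python
import json

# Progress records as they appear between specs in this log.
records = [
    '{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":192,"skipped":3151,"failed":0}',
    '{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":193,"skipped":3165,"failed":0}',
]

def progress(lines):
    """Return (completed, total, failed) from the most recent record."""
    last = json.loads(lines[-1])
    return last["completed"], last["total"], last["failed"]

done, total, failed = progress(records)
print(f"{done}/{total} specs completed, {failed} failed")
```

The `skipped` counter tracks the `S` markers printed between specs (specs excluded from this 278-spec conformance selection out of 4842 total).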
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":192,"skipped":3151,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:01:04.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 26 22:01:04.568: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
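The registration step below waits for the sample-apiserver Deployment to become available, dumping a `v1.DeploymentStatus` with `Available=False` / `Progressing=True` on each attempt until `ReadyReplicas` catches up. A minimal sketch of the readiness predicate those waits imply, using a plain dict shaped like the logged status (field names follow the dumps; the helper itself is illustrative, not the framework's code):

```python
def deployment_available(status):
    """True once the Deployment reports minimum availability.

    Mirrors the logged conditions: ready only when the "Available"
    condition is "True" and no replicas remain unavailable.
    """
    conds = {c["Type"]: c["Status"] for c in status.get("Conditions", [])}
    return (conds.get("Available") == "True"
            and status.get("UnavailableReplicas", 0) == 0)

# Status as dumped in the log while the sample-apiserver pod starts up.
pending = {
    "ObservedGeneration": 1, "Replicas": 1, "UpdatedReplicas": 1,
    "ReadyReplicas": 0, "AvailableReplicas": 0, "UnavailableReplicas": 1,
    "Conditions": [
        {"Type": "Available", "Status": "False",
         "Reason": "MinimumReplicasUnavailable"},
        {"Type": "Progressing", "Status": "True",
         "Reason": "ReplicaSetUpdated"},
    ],
}
ready = dict(pending, ReadyReplicas=1, AvailableReplicas=1,
             UnavailableReplicas=0,
             Conditions=[{"Type": "Available", "Status": "True",
                          "Reason": "MinimumReplicasAvailable"}])
print(deployment_available(pending), deployment_available(ready))
```

The same `MinimumReplicasUnavailable` / `ReplicaSetUpdated` cycle appears in the webhook deployment dumps earlier in this log; both waits resolve once the single replica passes its readiness check.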
Apr 26 22:01:05.576: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 26 22:01:08.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:01:10.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535265, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:01:12.562: INFO: Waited 524.53597ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:01:13.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4278" for this suite. • [SLOW TEST:8.893 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":193,"skipped":3165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:01:13.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update 
Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 26 22:01:13.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7439' Apr 26 22:01:13.803: INFO: stderr: "" Apr 26 22:01:13.803: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 26 22:01:13.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7439' Apr 26 22:01:13.910: INFO: stderr: "" Apr 26 22:01:13.910: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Apr 26 22:01:18.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7439' Apr 26 22:01:19.007: INFO: stderr: "" Apr 26 22:01:19.007: INFO: stdout: "update-demo-nautilus-l72qh update-demo-nautilus-vzfr5 " Apr 26 22:01:19.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l72qh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:19.098: INFO: stderr: "" Apr 26 22:01:19.098: INFO: stdout: "true" Apr 26 22:01:19.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l72qh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:19.183: INFO: stderr: "" Apr 26 22:01:19.183: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:01:19.183: INFO: validating pod update-demo-nautilus-l72qh Apr 26 22:01:19.188: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:01:19.188: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 22:01:19.188: INFO: update-demo-nautilus-l72qh is verified up and running Apr 26 22:01:19.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzfr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:19.281: INFO: stderr: "" Apr 26 22:01:19.281: INFO: stdout: "true" Apr 26 22:01:19.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzfr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:19.367: INFO: stderr: "" Apr 26 22:01:19.367: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:01:19.367: INFO: validating pod update-demo-nautilus-vzfr5 Apr 26 22:01:19.371: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:01:19.371: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 26 22:01:19.371: INFO: update-demo-nautilus-vzfr5 is verified up and running STEP: rolling-update to new replication controller Apr 26 22:01:19.374: INFO: scanned /root for discovery docs: Apr 26 22:01:19.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7439' Apr 26 22:01:42.036: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 26 22:01:42.036: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 26 22:01:42.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7439' Apr 26 22:01:42.161: INFO: stderr: "" Apr 26 22:01:42.162: INFO: stdout: "update-demo-kitten-htmr8 update-demo-kitten-zkdjb " Apr 26 22:01:42.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-htmr8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:42.278: INFO: stderr: "" Apr 26 22:01:42.278: INFO: stdout: "true" Apr 26 22:01:42.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-htmr8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:42.370: INFO: stderr: "" Apr 26 22:01:42.370: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 26 22:01:42.370: INFO: validating pod update-demo-kitten-htmr8 Apr 26 22:01:42.374: INFO: got data: { "image": "kitten.jpg" } Apr 26 22:01:42.374: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 26 22:01:42.375: INFO: update-demo-kitten-htmr8 is verified up and running Apr 26 22:01:42.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zkdjb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:42.475: INFO: stderr: "" Apr 26 22:01:42.475: INFO: stdout: "true" Apr 26 22:01:42.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zkdjb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7439' Apr 26 22:01:42.566: INFO: stderr: "" Apr 26 22:01:42.566: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 26 22:01:42.566: INFO: validating pod update-demo-kitten-zkdjb Apr 26 22:01:42.569: INFO: got data: { "image": "kitten.jpg" } Apr 26 22:01:42.569: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Apr 26 22:01:42.569: INFO: update-demo-kitten-zkdjb is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:01:42.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7439" for this suite. • [SLOW TEST:29.231 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":194,"skipped":3207,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:01:42.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a 
replication controller Apr 26 22:01:42.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6116' Apr 26 22:01:42.876: INFO: stderr: "" Apr 26 22:01:42.876: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 26 22:01:42.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6116' Apr 26 22:01:43.001: INFO: stderr: "" Apr 26 22:01:43.001: INFO: stdout: "update-demo-nautilus-hstfh update-demo-nautilus-nwmt7 " Apr 26 22:01:43.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hstfh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:01:43.106: INFO: stderr: "" Apr 26 22:01:43.106: INFO: stdout: "" Apr 26 22:01:43.106: INFO: update-demo-nautilus-hstfh is created but not running Apr 26 22:01:48.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6116' Apr 26 22:01:48.246: INFO: stderr: "" Apr 26 22:01:48.246: INFO: stdout: "update-demo-nautilus-hstfh update-demo-nautilus-nwmt7 " Apr 26 22:01:48.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hstfh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:01:48.469: INFO: stderr: "" Apr 26 22:01:48.469: INFO: stdout: "true" Apr 26 22:01:48.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hstfh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:01:48.566: INFO: stderr: "" Apr 26 22:01:48.566: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:01:48.566: INFO: validating pod update-demo-nautilus-hstfh Apr 26 22:01:48.571: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:01:48.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 22:01:48.571: INFO: update-demo-nautilus-hstfh is verified up and running Apr 26 22:01:48.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:01:48.675: INFO: stderr: "" Apr 26 22:01:48.675: INFO: stdout: "true" Apr 26 22:01:48.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:01:48.767: INFO: stderr: "" Apr 26 22:01:48.767: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:01:48.767: INFO: validating pod update-demo-nautilus-nwmt7 Apr 26 22:01:48.771: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:01:48.771: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 26 22:01:48.771: INFO: update-demo-nautilus-nwmt7 is verified up and running STEP: scaling down the replication controller Apr 26 22:01:48.773: INFO: scanned /root for discovery docs: Apr 26 22:01:48.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6116' Apr 26 22:01:49.934: INFO: stderr: "" Apr 26 22:01:49.934: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 26 22:01:49.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6116' Apr 26 22:01:50.043: INFO: stderr: "" Apr 26 22:01:50.043: INFO: stdout: "update-demo-nautilus-hstfh update-demo-nautilus-nwmt7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 26 22:01:55.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6116' Apr 26 22:01:55.146: INFO: stderr: "" Apr 26 22:01:55.146: INFO: stdout: "update-demo-nautilus-hstfh update-demo-nautilus-nwmt7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 26 22:02:00.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6116' Apr 26 22:02:00.249: INFO: stderr: "" Apr 26 22:02:00.249: INFO: stdout: "update-demo-nautilus-nwmt7 " Apr 26 22:02:00.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:00.345: INFO: stderr: "" Apr 26 22:02:00.345: INFO: stdout: "true" Apr 26 22:02:00.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:00.433: INFO: stderr: "" Apr 26 22:02:00.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:02:00.433: INFO: validating pod update-demo-nautilus-nwmt7 Apr 26 22:02:00.436: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:02:00.436: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 22:02:00.436: INFO: update-demo-nautilus-nwmt7 is verified up and running STEP: scaling up the replication controller Apr 26 22:02:00.438: INFO: scanned /root for discovery docs: Apr 26 22:02:00.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6116' Apr 26 22:02:01.550: INFO: stderr: "" Apr 26 22:02:01.550: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 26 22:02:01.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6116' Apr 26 22:02:01.657: INFO: stderr: "" Apr 26 22:02:01.657: INFO: stdout: "update-demo-nautilus-nwmt7 update-demo-nautilus-qv7mw " Apr 26 22:02:01.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:01.750: INFO: stderr: "" Apr 26 22:02:01.750: INFO: stdout: "true" Apr 26 22:02:01.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:01.865: INFO: stderr: "" Apr 26 22:02:01.865: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:02:01.865: INFO: validating pod update-demo-nautilus-nwmt7 Apr 26 22:02:01.868: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:02:01.868: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 22:02:01.868: INFO: update-demo-nautilus-nwmt7 is verified up and running Apr 26 22:02:01.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7mw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:01.959: INFO: stderr: "" Apr 26 22:02:01.959: INFO: stdout: "" Apr 26 22:02:01.959: INFO: update-demo-nautilus-qv7mw is created but not running Apr 26 22:02:06.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6116' Apr 26 22:02:07.081: INFO: stderr: "" Apr 26 22:02:07.081: INFO: stdout: "update-demo-nautilus-nwmt7 update-demo-nautilus-qv7mw " Apr 26 22:02:07.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:07.193: INFO: stderr: "" Apr 26 22:02:07.193: INFO: stdout: "true" Apr 26 22:02:07.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:07.300: INFO: stderr: "" Apr 26 22:02:07.300: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:02:07.300: INFO: validating pod update-demo-nautilus-nwmt7 Apr 26 22:02:07.304: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:02:07.305: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 22:02:07.305: INFO: update-demo-nautilus-nwmt7 is verified up and running Apr 26 22:02:07.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7mw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:07.412: INFO: stderr: "" Apr 26 22:02:07.412: INFO: stdout: "true" Apr 26 22:02:07.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7mw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6116' Apr 26 22:02:07.511: INFO: stderr: "" Apr 26 22:02:07.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 26 22:02:07.511: INFO: validating pod update-demo-nautilus-qv7mw Apr 26 22:02:07.515: INFO: got data: { "image": "nautilus.jpg" } Apr 26 22:02:07.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 26 22:02:07.515: INFO: update-demo-nautilus-qv7mw is verified up and running STEP: using delete to clean up resources Apr 26 22:02:07.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6116' Apr 26 22:02:07.626: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 22:02:07.626: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 26 22:02:07.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6116' Apr 26 22:02:07.742: INFO: stderr: "No resources found in kubectl-6116 namespace.\n" Apr 26 22:02:07.742: INFO: stdout: "" Apr 26 22:02:07.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6116 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 22:02:07.844: INFO: stderr: "" Apr 26 22:02:07.844: INFO: stdout: "update-demo-nautilus-nwmt7\nupdate-demo-nautilus-qv7mw\n" Apr 26 22:02:08.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6116' Apr 26 22:02:08.447: INFO: stderr: "No resources found in kubectl-6116 
namespace.\n" Apr 26 22:02:08.447: INFO: stdout: "" Apr 26 22:02:08.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6116 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 22:02:08.553: INFO: stderr: "" Apr 26 22:02:08.553: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:02:08.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6116" for this suite. • [SLOW TEST:25.984 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":195,"skipped":3217,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:02:08.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-b64ea8d1-2f86-4cda-a490-1c68171dfe08 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:02:14.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3677" for this suite. • [SLOW TEST:6.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3223,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:02:14.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-410cac90-60b0-4c73-a60e-9a538ba06dcd STEP: Creating secret with name 
s-test-opt-upd-20c1aa8e-45fb-4cf9-b1a9-a1d7d13bf4ac STEP: Creating the pod STEP: Deleting secret s-test-opt-del-410cac90-60b0-4c73-a60e-9a538ba06dcd STEP: Updating secret s-test-opt-upd-20c1aa8e-45fb-4cf9-b1a9-a1d7d13bf4ac STEP: Creating secret with name s-test-opt-create-6b087965-f590-4cee-a47e-cee9e2b0a11e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:02:23.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7018" for this suite. • [SLOW TEST:8.301 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3243,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:02:23.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication 
controller my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d Apr 26 22:02:23.176: INFO: Pod name my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d: Found 0 pods out of 1 Apr 26 22:02:28.211: INFO: Pod name my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d: Found 1 pods out of 1 Apr 26 22:02:28.211: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d" are running Apr 26 22:02:28.222: INFO: Pod "my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d-mpt7j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:02:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:02:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:02:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:02:23 +0000 UTC Reason: Message:}]) Apr 26 22:02:28.222: INFO: Trying to dial the pod Apr 26 22:02:33.229: INFO: Controller my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d: Got expected result from replica 1 [my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d-mpt7j]: "my-hostname-basic-d9e69a59-8b1b-460c-a3a3-f28bdf42f70d-mpt7j", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:02:33.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4175" for this suite. 
• [SLOW TEST:10.145 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":198,"skipped":3248,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:02:33.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:02:37.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7529" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3249,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:02:37.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:03:03.414: INFO: Container started at 2020-04-26 22:02:39 +0000 UTC, pod became ready at 2020-04-26 22:03:02 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:03:03.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9194" for this suite. 
• [SLOW TEST:26.108 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3251,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:03:03.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 26 22:03:03.475: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 26 22:03:13.977: INFO: >>> kubeConfig: /root/.kube/config Apr 26 22:03:16.894: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:03:26.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6700" for this suite. • [SLOW TEST:23.103 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":201,"skipped":3262,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:03:26.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-6f3e2e88-afe5-4219-8646-fea19f1fcbb5 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-6f3e2e88-afe5-4219-8646-fea19f1fcbb5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:03:34.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4243" for this suite. • [SLOW TEST:8.160 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3267,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:03:34.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:03:34.786: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 26 22:03:36.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 create -f -' Apr 26 22:03:39.717: INFO: stderr: "" Apr 26 22:03:39.717: 
INFO: stdout: "e2e-test-crd-publish-openapi-8366-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 26 22:03:39.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 delete e2e-test-crd-publish-openapi-8366-crds test-cr' Apr 26 22:03:39.822: INFO: stderr: "" Apr 26 22:03:39.822: INFO: stdout: "e2e-test-crd-publish-openapi-8366-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 26 22:03:39.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 apply -f -' Apr 26 22:03:40.451: INFO: stderr: "" Apr 26 22:03:40.451: INFO: stdout: "e2e-test-crd-publish-openapi-8366-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 26 22:03:40.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2587 delete e2e-test-crd-publish-openapi-8366-crds test-cr' Apr 26 22:03:40.557: INFO: stderr: "" Apr 26 22:03:40.557: INFO: stdout: "e2e-test-crd-publish-openapi-8366-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 26 22:03:40.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8366-crds' Apr 26 22:03:41.141: INFO: stderr: "" Apr 26 22:03:41.141: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8366-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:03:43.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2587" for this suite. 
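The log above shows client-side validation accepting a CR with unknown properties because the CRD publishes no validation schema. A sketch of the kind of manifest the test likely pipes to `kubectl create -f -` — the group, kind, namespace, and CR name are taken from the log; the body with an arbitrary unknown property is an assumption, since the actual manifest is elided. Applying it needs a live cluster with the CRD registered, so this script only writes the manifest and prints the command it would run:

```shell
# Hypothetical reconstruction of the CR used by the test; only the names
# appear in the log, the spec body is assumed.
cat <<'EOF' > /tmp/test-cr.yaml
apiVersion: crd-publish-openapi-test-empty.example.com/v1
kind: E2e-test-crd-publish-openapi-8366-crd
metadata:
  name: test-cr
spec:
  anyUnknownProperty: is-accepted   # no schema, so client-side validation allows this
EOF
# The command the log shows the framework running (not executed here):
echo "kubectl --namespace=crd-publish-openapi-2587 create -f /tmp/test-cr.yaml"
```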
• [SLOW TEST:8.451 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":203,"skipped":3282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:03:43.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-fa6de43c-f0a1-4ee8-936f-791e669dc4e2 STEP: Creating secret with name s-test-opt-upd-1934b18a-ca71-4a01-a46d-52317e03c320 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fa6de43c-f0a1-4ee8-936f-791e669dc4e2 STEP: Updating secret s-test-opt-upd-1934b18a-ca71-4a01-a46d-52317e03c320 STEP: Creating secret with name s-test-opt-create-970db621-0c9b-4341-8a6f-323cfd23ea40 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:04:59.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9811" for this suite. • [SLOW TEST:76.571 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3334,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:04:59.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 26 22:04:59.783: INFO: Waiting up to 5m0s for pod "pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f" in namespace "emptydir-5475" to be "success or failure" Apr 26 22:04:59.786: INFO: Pod "pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.114729ms Apr 26 22:05:01.790: INFO: Pod "pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007000501s Apr 26 22:05:03.795: INFO: Pod "pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011573148s STEP: Saw pod success Apr 26 22:05:03.795: INFO: Pod "pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f" satisfied condition "success or failure" Apr 26 22:05:03.798: INFO: Trying to get logs from node jerma-worker2 pod pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f container test-container: STEP: delete the pod Apr 26 22:05:03.841: INFO: Waiting for pod pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f to disappear Apr 26 22:05:03.846: INFO: Pod pod-411eb9be-f1c7-44f3-bdfe-a30c8d4bc72f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:05:03.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5475" for this suite. 
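The emptyDir test that just passed verifies that a file created as root with mode 0644 on a tmpfs-backed emptyDir volume ends up with exactly those permissions. The same permission check can be sketched locally, with an ordinary temp directory standing in for the in-pod tmpfs mount (this is an illustration of what the test asserts, not the test's own code):

```shell
# Create a file as the current user, set mode 0644, read the mode back.
dir=$(mktemp -d)
touch "$dir/mount-test-file"
chmod 0644 "$dir/mount-test-file"
# stat -c '%a' (GNU coreutils) prints the octal permission bits
stat -c '%a' "$dir/mount-test-file"   # prints: 644
```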
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:05:03.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 22:05:03.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d" in namespace "projected-7348" to be "success or failure" Apr 26 22:05:03.924: INFO: Pod "downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471428ms Apr 26 22:05:05.967: INFO: Pod "downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047852886s Apr 26 22:05:07.972: INFO: Pod "downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.052580299s Apr 26 22:05:09.976: INFO: Pod "downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056569256s STEP: Saw pod success Apr 26 22:05:09.976: INFO: Pod "downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d" satisfied condition "success or failure" Apr 26 22:05:09.980: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d container client-container: STEP: delete the pod Apr 26 22:05:09.999: INFO: Waiting for pod downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d to disappear Apr 26 22:05:10.003: INFO: Pod downwardapi-volume-4bbacb35-3826-4b8b-b4a5-8c2417deb96d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:05:10.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7348" for this suite. 
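The downward API test above relies on a documented fallback: when a container sets no memory limit, a `resourceFieldRef` on `limits.memory` reports the node's allocatable memory instead. The volume stanza involved has roughly this shape — the field names are standard Kubernetes API, but the exact manifest is not in the log, so treat this as an assumed reconstruction:

```shell
# Write the (assumed) projected downwardAPI volume fragment the test exercises.
cat <<'EOF' > /tmp/downward-volume.yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container   # container name seen in the log
            resource: limits.memory           # defaults to node allocatable when unset
EOF
grep -c resourceFieldRef /tmp/downward-volume.yaml
```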
• [SLOW TEST:6.158 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:05:10.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 26 22:05:10.064: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 26 22:05:10.098: INFO: Waiting for terminating namespaces to be deleted... 
Apr 26 22:05:10.101: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 26 22:05:10.106: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 26 22:05:10.106: INFO: Container kindnet-cni ready: true, restart count 0 Apr 26 22:05:10.106: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 26 22:05:10.106: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 22:05:10.106: INFO: pod-projected-secrets-a042038b-28e6-43b4-a591-7f848e9d8652 from projected-9811 started at 2020-04-26 22:03:43 +0000 UTC (3 container statuses recorded) Apr 26 22:05:10.106: INFO: Container creates-volume-test ready: false, restart count 0 Apr 26 22:05:10.106: INFO: Container dels-volume-test ready: false, restart count 0 Apr 26 22:05:10.106: INFO: Container upds-volume-test ready: false, restart count 0 Apr 26 22:05:10.106: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 26 22:05:10.111: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 26 22:05:10.111: INFO: Container kube-proxy ready: true, restart count 0 Apr 26 22:05:10.111: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 26 22:05:10.111: INFO: Container kube-hunter ready: false, restart count 0 Apr 26 22:05:10.111: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 26 22:05:10.111: INFO: Container kube-bench ready: false, restart count 0 Apr 26 22:05:10.111: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 26 22:05:10.111: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-69ab346c-ef7d-4bcc-92e4-823de5f86aad 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321, hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-69ab346c-ef7d-4bcc-92e4-823de5f86aad off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-69ab346c-ef7d-4bcc-92e4-823de5f86aad [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:05:26.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3024" for this suite. 
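The scheduling steps above can be sketched as a pair of manifests. This is a reconstruction, not the test source: the container name and image are assumptions, while the node label and its value 90 come from the log. The point being validated is that a (hostIP, hostPort, protocol) tuple must be unique per node, so pods that differ only in hostIP or protocol can share hostPort 54321:

```yaml
# pod1: binds 127.0.0.1:54321/TCP on the labeled node.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/e2e-69ab346c-ef7d-4bcc-92e4-823de5f86aad: "90"
  containers:
  - name: agnhost          # assumed container name/image
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
# pod2: same hostPort, different hostIP -- no conflict, also schedules.
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  nodeSelector:
    kubernetes.io/e2e-69ab346c-ef7d-4bcc-92e4-823de5f86aad: "90"
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2
      protocol: TCP
```

pod3 in the log takes the same shape as pod2 but with `protocol: UDP`, which again avoids a tuple collision and is scheduled onto the same node.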
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.309 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":207,"skipped":3464,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:05:26.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:05:26.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 26 22:05:26.590: INFO: stderr: "" Apr 26 22:05:26.590: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", 
GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:05:26.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5435" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":208,"skipped":3486,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:05:26.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:05:26.765: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"654d7b07-ae53-41c3-a507-9c9e2ffb7421", Controller:(*bool)(0xc00452dbb2), BlockOwnerDeletion:(*bool)(0xc00452dbb3)}} Apr 26 22:05:26.783: INFO: 
pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"abb40464-9b6d-47e2-b5ae-75d4d66d5e66", Controller:(*bool)(0xc00458f012), BlockOwnerDeletion:(*bool)(0xc00458f013)}} Apr 26 22:05:26.836: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9fc2f875-d2b0-4450-9d5b-cfb6b16611bc", Controller:(*bool)(0xc0045bbe0a), BlockOwnerDeletion:(*bool)(0xc0045bbe0b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:05:31.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6610" for this suite. • [SLOW TEST:5.311 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":209,"skipped":3492,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:05:31.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned 
in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:05:31.992: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 26 22:05:35.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1427 create -f -' Apr 26 22:05:38.237: INFO: stderr: "" Apr 26 22:05:38.237: INFO: stdout: "e2e-test-crd-publish-openapi-4210-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 26 22:05:38.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1427 delete e2e-test-crd-publish-openapi-4210-crds test-cr' Apr 26 22:05:38.354: INFO: stderr: "" Apr 26 22:05:38.354: INFO: stdout: "e2e-test-crd-publish-openapi-4210-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 26 22:05:38.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1427 apply -f -' Apr 26 22:05:38.621: INFO: stderr: "" Apr 26 22:05:38.621: INFO: stdout: "e2e-test-crd-publish-openapi-4210-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 26 22:05:38.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1427 delete e2e-test-crd-publish-openapi-4210-crds test-cr' Apr 26 22:05:38.763: INFO: stderr: "" Apr 26 22:05:38.763: INFO: stdout: "e2e-test-crd-publish-openapi-4210-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 26 22:05:38.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4210-crds' Apr 26 22:05:39.026: INFO: stderr: "" Apr 26 22:05:39.026: INFO: 
stdout: "KIND: E2e-test-crd-publish-openapi-4210-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:05:41.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1427" for this suite. 
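The `kubectl explain` output above describes `spec` only as "Specification of Waldo" because the published schema preserves unknown fields rather than enumerating properties. A hedged sketch of a CRD that produces this behavior; the group and names below are illustrative, since the test generates random ones such as `e2e-test-crd-publish-openapi-4210-crds`:

```yaml
# Sketch only: spec/status are open-ended objects, so client-side
# validation (kubectl create/apply) accepts any unknown properties,
# matching the behavior exercised above.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com        # illustrative, not the test's generated name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

With `x-kubernetes-preserve-unknown-fields: true`, the API server publishes the field in its OpenAPI discovery document without pruning, which is what makes both the create/apply round-trips and `kubectl explain` in this test succeed.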
• [SLOW TEST:10.038 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":210,"skipped":3506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:05:41.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6141 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 26 22:05:42.034: INFO: Found 0 stateful pods, waiting for 3 Apr 26 
22:05:52.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:05:52.039: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:05:52.039: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 26 22:06:02.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:06:02.039: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:06:02.039: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 26 22:06:02.067: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 26 22:06:12.103: INFO: Updating stateful set ss2 Apr 26 22:06:12.119: INFO: Waiting for Pod statefulset-6141/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 26 22:06:22.515: INFO: Found 2 stateful pods, waiting for 3 Apr 26 22:06:32.520: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:06:32.520: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:06:32.520: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 26 22:06:32.544: INFO: Updating stateful set ss2 Apr 26 22:06:32.600: INFO: Waiting for Pod statefulset-6141/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 26 22:06:42.624: INFO: Updating stateful set ss2 Apr 26 22:06:42.667: INFO: Waiting for StatefulSet statefulset-6141/ss2 to 
complete update Apr 26 22:06:42.667: INFO: Waiting for Pod statefulset-6141/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 26 22:06:52.676: INFO: Waiting for StatefulSet statefulset-6141/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 26 22:07:02.676: INFO: Deleting all statefulset in ns statefulset-6141 Apr 26 22:07:02.679: INFO: Scaling statefulset ss2 to 0 Apr 26 22:07:22.718: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 22:07:22.738: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:07:22.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6141" for this suite. • [SLOW TEST:100.824 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":211,"skipped":3540,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:07:22.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8656 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8656 I0426 22:07:22.982058 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8656, replica count: 2 I0426 22:07:26.032534 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 22:07:29.032777 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 22:07:29.032: INFO: Creating new exec pod Apr 26 22:07:34.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8656 execpodk6t98 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 26 22:07:34.312: INFO: stderr: "I0426 22:07:34.208883 3818 log.go:172] (0xc0008c8000) (0xc0002f3400) Create stream\nI0426 22:07:34.208943 3818 log.go:172] (0xc0008c8000) (0xc0002f3400) Stream added, broadcasting: 1\nI0426 22:07:34.211420 3818 log.go:172] (0xc0008c8000) Reply frame received for 1\nI0426 22:07:34.211463 3818 log.go:172] (0xc0008c8000) (0xc0008be000) 
Create stream\nI0426 22:07:34.211481 3818 log.go:172] (0xc0008c8000) (0xc0008be000) Stream added, broadcasting: 3\nI0426 22:07:34.212307 3818 log.go:172] (0xc0008c8000) Reply frame received for 3\nI0426 22:07:34.212346 3818 log.go:172] (0xc0008c8000) (0xc0008be0a0) Create stream\nI0426 22:07:34.212357 3818 log.go:172] (0xc0008c8000) (0xc0008be0a0) Stream added, broadcasting: 5\nI0426 22:07:34.213540 3818 log.go:172] (0xc0008c8000) Reply frame received for 5\nI0426 22:07:34.305998 3818 log.go:172] (0xc0008c8000) Data frame received for 5\nI0426 22:07:34.306030 3818 log.go:172] (0xc0008be0a0) (5) Data frame handling\nI0426 22:07:34.306040 3818 log.go:172] (0xc0008be0a0) (5) Data frame sent\nI0426 22:07:34.306045 3818 log.go:172] (0xc0008c8000) Data frame received for 5\nI0426 22:07:34.306049 3818 log.go:172] (0xc0008be0a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0426 22:07:34.306069 3818 log.go:172] (0xc0008c8000) Data frame received for 3\nI0426 22:07:34.306077 3818 log.go:172] (0xc0008be000) (3) Data frame handling\nI0426 22:07:34.307332 3818 log.go:172] (0xc0008c8000) Data frame received for 1\nI0426 22:07:34.307349 3818 log.go:172] (0xc0002f3400) (1) Data frame handling\nI0426 22:07:34.307356 3818 log.go:172] (0xc0002f3400) (1) Data frame sent\nI0426 22:07:34.307369 3818 log.go:172] (0xc0008c8000) (0xc0002f3400) Stream removed, broadcasting: 1\nI0426 22:07:34.307396 3818 log.go:172] (0xc0008c8000) Go away received\nI0426 22:07:34.307663 3818 log.go:172] (0xc0008c8000) (0xc0002f3400) Stream removed, broadcasting: 1\nI0426 22:07:34.307675 3818 log.go:172] (0xc0008c8000) (0xc0008be000) Stream removed, broadcasting: 3\nI0426 22:07:34.307681 3818 log.go:172] (0xc0008c8000) (0xc0008be0a0) Stream removed, broadcasting: 5\n" Apr 26 22:07:34.312: INFO: stdout: "" Apr 26 22:07:34.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-8656 execpodk6t98 -- /bin/sh -x -c nc -zv -t -w 2 10.106.233.65 80' Apr 26 22:07:34.513: INFO: stderr: "I0426 22:07:34.439558 3839 log.go:172] (0xc000aca2c0) (0xc0002b5ae0) Create stream\nI0426 22:07:34.439640 3839 log.go:172] (0xc000aca2c0) (0xc0002b5ae0) Stream added, broadcasting: 1\nI0426 22:07:34.442253 3839 log.go:172] (0xc000aca2c0) Reply frame received for 1\nI0426 22:07:34.442333 3839 log.go:172] (0xc000aca2c0) (0xc0002b5b80) Create stream\nI0426 22:07:34.442361 3839 log.go:172] (0xc000aca2c0) (0xc0002b5b80) Stream added, broadcasting: 3\nI0426 22:07:34.443457 3839 log.go:172] (0xc000aca2c0) Reply frame received for 3\nI0426 22:07:34.443485 3839 log.go:172] (0xc000aca2c0) (0xc0002b5c20) Create stream\nI0426 22:07:34.443491 3839 log.go:172] (0xc000aca2c0) (0xc0002b5c20) Stream added, broadcasting: 5\nI0426 22:07:34.444486 3839 log.go:172] (0xc000aca2c0) Reply frame received for 5\nI0426 22:07:34.506965 3839 log.go:172] (0xc000aca2c0) Data frame received for 3\nI0426 22:07:34.507006 3839 log.go:172] (0xc0002b5b80) (3) Data frame handling\nI0426 22:07:34.507096 3839 log.go:172] (0xc000aca2c0) Data frame received for 5\nI0426 22:07:34.507150 3839 log.go:172] (0xc0002b5c20) (5) Data frame handling\nI0426 22:07:34.507186 3839 log.go:172] (0xc0002b5c20) (5) Data frame sent\nI0426 22:07:34.507212 3839 log.go:172] (0xc000aca2c0) Data frame received for 5\nI0426 22:07:34.507227 3839 log.go:172] (0xc0002b5c20) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.233.65 80\nConnection to 10.106.233.65 80 port [tcp/http] succeeded!\nI0426 22:07:34.508680 3839 log.go:172] (0xc000aca2c0) Data frame received for 1\nI0426 22:07:34.508704 3839 log.go:172] (0xc0002b5ae0) (1) Data frame handling\nI0426 22:07:34.508738 3839 log.go:172] (0xc0002b5ae0) (1) Data frame sent\nI0426 22:07:34.508765 3839 log.go:172] (0xc000aca2c0) (0xc0002b5ae0) Stream removed, broadcasting: 1\nI0426 22:07:34.508792 3839 log.go:172] (0xc000aca2c0) Go away received\nI0426 
22:07:34.509229 3839 log.go:172] (0xc000aca2c0) (0xc0002b5ae0) Stream removed, broadcasting: 1\nI0426 22:07:34.509260 3839 log.go:172] (0xc000aca2c0) (0xc0002b5b80) Stream removed, broadcasting: 3\nI0426 22:07:34.509274 3839 log.go:172] (0xc000aca2c0) (0xc0002b5c20) Stream removed, broadcasting: 5\n" Apr 26 22:07:34.514: INFO: stdout: "" Apr 26 22:07:34.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8656 execpodk6t98 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30030' Apr 26 22:07:34.728: INFO: stderr: "I0426 22:07:34.651882 3860 log.go:172] (0xc00099e0b0) (0xc000713d60) Create stream\nI0426 22:07:34.651962 3860 log.go:172] (0xc00099e0b0) (0xc000713d60) Stream added, broadcasting: 1\nI0426 22:07:34.655230 3860 log.go:172] (0xc00099e0b0) Reply frame received for 1\nI0426 22:07:34.655300 3860 log.go:172] (0xc00099e0b0) (0xc00067e820) Create stream\nI0426 22:07:34.655334 3860 log.go:172] (0xc00099e0b0) (0xc00067e820) Stream added, broadcasting: 3\nI0426 22:07:34.656399 3860 log.go:172] (0xc00099e0b0) Reply frame received for 3\nI0426 22:07:34.656442 3860 log.go:172] (0xc00099e0b0) (0xc0004035e0) Create stream\nI0426 22:07:34.656454 3860 log.go:172] (0xc00099e0b0) (0xc0004035e0) Stream added, broadcasting: 5\nI0426 22:07:34.657702 3860 log.go:172] (0xc00099e0b0) Reply frame received for 5\nI0426 22:07:34.720844 3860 log.go:172] (0xc00099e0b0) Data frame received for 5\nI0426 22:07:34.721052 3860 log.go:172] (0xc0004035e0) (5) Data frame handling\nI0426 22:07:34.721101 3860 log.go:172] (0xc0004035e0) (5) Data frame sent\nI0426 22:07:34.721275 3860 log.go:172] (0xc00099e0b0) Data frame received for 5\nI0426 22:07:34.721295 3860 log.go:172] (0xc0004035e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30030\nConnection to 172.17.0.10 30030 port [tcp/30030] succeeded!\nI0426 22:07:34.721348 3860 log.go:172] (0xc0004035e0) (5) Data frame sent\nI0426 22:07:34.721667 3860 log.go:172] (0xc00099e0b0) Data frame 
received for 5\nI0426 22:07:34.721702 3860 log.go:172] (0xc0004035e0) (5) Data frame handling\nI0426 22:07:34.721744 3860 log.go:172] (0xc00099e0b0) Data frame received for 3\nI0426 22:07:34.721760 3860 log.go:172] (0xc00067e820) (3) Data frame handling\nI0426 22:07:34.723069 3860 log.go:172] (0xc00099e0b0) Data frame received for 1\nI0426 22:07:34.723095 3860 log.go:172] (0xc000713d60) (1) Data frame handling\nI0426 22:07:34.723109 3860 log.go:172] (0xc000713d60) (1) Data frame sent\nI0426 22:07:34.723118 3860 log.go:172] (0xc00099e0b0) (0xc000713d60) Stream removed, broadcasting: 1\nI0426 22:07:34.723135 3860 log.go:172] (0xc00099e0b0) Go away received\nI0426 22:07:34.723521 3860 log.go:172] (0xc00099e0b0) (0xc000713d60) Stream removed, broadcasting: 1\nI0426 22:07:34.723534 3860 log.go:172] (0xc00099e0b0) (0xc00067e820) Stream removed, broadcasting: 3\nI0426 22:07:34.723541 3860 log.go:172] (0xc00099e0b0) (0xc0004035e0) Stream removed, broadcasting: 5\n" Apr 26 22:07:34.728: INFO: stdout: "" Apr 26 22:07:34.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8656 execpodk6t98 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30030' Apr 26 22:07:34.909: INFO: stderr: "I0426 22:07:34.847344 3880 log.go:172] (0xc0007ae000) (0xc000726140) Create stream\nI0426 22:07:34.847415 3880 log.go:172] (0xc0007ae000) (0xc000726140) Stream added, broadcasting: 1\nI0426 22:07:34.850759 3880 log.go:172] (0xc0007ae000) Reply frame received for 1\nI0426 22:07:34.850816 3880 log.go:172] (0xc0007ae000) (0xc0006a3ae0) Create stream\nI0426 22:07:34.850828 3880 log.go:172] (0xc0007ae000) (0xc0006a3ae0) Stream added, broadcasting: 3\nI0426 22:07:34.851804 3880 log.go:172] (0xc0007ae000) Reply frame received for 3\nI0426 22:07:34.851864 3880 log.go:172] (0xc0007ae000) (0xc00089e000) Create stream\nI0426 22:07:34.851896 3880 log.go:172] (0xc0007ae000) (0xc00089e000) Stream added, broadcasting: 5\nI0426 22:07:34.852774 3880 log.go:172] 
(0xc0007ae000) Reply frame received for 5\nI0426 22:07:34.904616 3880 log.go:172] (0xc0007ae000) Data frame received for 5\nI0426 22:07:34.904643 3880 log.go:172] (0xc00089e000) (5) Data frame handling\nI0426 22:07:34.904651 3880 log.go:172] (0xc00089e000) (5) Data frame sent\nI0426 22:07:34.904657 3880 log.go:172] (0xc0007ae000) Data frame received for 5\nI0426 22:07:34.904662 3880 log.go:172] (0xc00089e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30030\nConnection to 172.17.0.8 30030 port [tcp/30030] succeeded!\nI0426 22:07:34.904671 3880 log.go:172] (0xc0007ae000) Data frame received for 3\nI0426 22:07:34.904729 3880 log.go:172] (0xc0006a3ae0) (3) Data frame handling\nI0426 22:07:34.905816 3880 log.go:172] (0xc0007ae000) Data frame received for 1\nI0426 22:07:34.905836 3880 log.go:172] (0xc000726140) (1) Data frame handling\nI0426 22:07:34.905847 3880 log.go:172] (0xc000726140) (1) Data frame sent\nI0426 22:07:34.905859 3880 log.go:172] (0xc0007ae000) (0xc000726140) Stream removed, broadcasting: 1\nI0426 22:07:34.905875 3880 log.go:172] (0xc0007ae000) Go away received\nI0426 22:07:34.906184 3880 log.go:172] (0xc0007ae000) (0xc000726140) Stream removed, broadcasting: 1\nI0426 22:07:34.906199 3880 log.go:172] (0xc0007ae000) (0xc0006a3ae0) Stream removed, broadcasting: 3\nI0426 22:07:34.906208 3880 log.go:172] (0xc0007ae000) (0xc00089e000) Stream removed, broadcasting: 5\n" Apr 26 22:07:34.909: INFO: stdout: "" Apr 26 22:07:34.909: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:07:34.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8656" for this suite. 
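The service test above flips `spec.type` from ExternalName to NodePort, then probes port 80 on the service name, on the ClusterIP (10.106.233.65), and on NodePort 30030 of each node with `nc`. A reconstructed before/after sketch; the ExternalName target, selector, and targetPort are assumptions that do not appear in the log:

```yaml
# Before: an ExternalName service resolves to a DNS CNAME, no proxying.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: backend.example.com   # placeholder; real target not in the log
---
# After: the same service mutated to NodePort, now backed by the
# externalname-service replication controller's two pods.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: NodePort
  selector:
    name: externalname-service        # assumed selector
  ports:
  - port: 80
    targetPort: 8080                  # assumed container port
    protocol: TCP
```

The design point the test checks is that a type change is an in-place update: the service keeps its name and gains a ClusterIP and an allocated NodePort (30030 in this run) without being recreated.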
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:12.209 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":212,"skipped":3552,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:07:34.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:07:51.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-127" for this suite.
• [SLOW TEST:16.484 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":213,"skipped":3564,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:07:51.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 22:07:52.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 22:07:54.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535672, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535672, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535672, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535671, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 22:07:57.095: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:07:57.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6847" for this suite.
STEP: Destroying namespace "webhook-6847-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.858 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":214,"skipped":3565,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:07:57.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6099
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6099
STEP: Creating statefulset with conflicting port in namespace statefulset-6099
STEP: Waiting until pod test-pod will start running in namespace statefulset-6099
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6099
Apr 26 22:08:01.463: INFO: Observed stateful pod in namespace: statefulset-6099, name: ss-0, uid: c18299a1-110c-4525-9268-754e3a94db36, status phase: Failed. Waiting for statefulset controller to delete.
Apr 26 22:08:01.468: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6099
STEP: Removing pod with conflicting port in namespace statefulset-6099
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6099 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 26 22:08:07.546: INFO: Deleting all statefulset in ns statefulset-6099
Apr 26 22:08:07.548: INFO: Scaling statefulset ss to 0
Apr 26 22:08:27.567: INFO: Waiting for statefulset status.replicas updated to 0
Apr 26 22:08:27.570: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:08:27.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6099" for this suite.
• [SLOW TEST:30.271 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":215,"skipped":3618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:08:27.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:08:58.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8443" for this suite.
• [SLOW TEST:30.797 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3648,"failed":0}
S
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:08:58.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-1473b850-6876-40dd-967c-95a4267710b9
STEP: Creating a pod to test consume secrets
Apr 26 22:08:58.470: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9" in namespace "projected-4094" to be "success or failure"
Apr 26 22:08:58.483: INFO: Pod "pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.212074ms
Apr 26 22:09:00.487: INFO: Pod "pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017497421s
Apr 26 22:09:02.492: INFO: Pod "pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021937477s
STEP: Saw pod success
Apr 26 22:09:02.492: INFO: Pod "pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9" satisfied condition "success or failure"
Apr 26 22:09:02.495: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9 container secret-volume-test:
STEP: delete the pod
Apr 26 22:09:02.542: INFO: Waiting for pod pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9 to disappear
Apr 26 22:09:02.558: INFO: Pod pod-projected-secrets-cd9bc6f8-213e-4877-ab56-0f5d3e3966d9 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:09:02.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4094" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3649,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:09:02.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0426 22:09:42.843213       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 26 22:09:42.843: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:09:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5466" for this suite.
• [SLOW TEST:40.283 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":218,"skipped":3694,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:09:42.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-5f34cdb2-0d49-4374-8eb7-5e2726c0b66d
STEP: Creating a pod to test consume configMaps
Apr 26 22:09:42.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369" in namespace "configmap-9573" to be "success or failure"
Apr 26 22:09:42.987: INFO: Pod "pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369": Phase="Pending", Reason="", readiness=false. Elapsed: 21.790821ms
Apr 26 22:09:44.991: INFO: Pod "pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025984623s
Apr 26 22:09:46.996: INFO: Pod "pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030443347s
STEP: Saw pod success
Apr 26 22:09:46.996: INFO: Pod "pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369" satisfied condition "success or failure"
Apr 26 22:09:47.000: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369 container configmap-volume-test:
STEP: delete the pod
Apr 26 22:09:47.086: INFO: Waiting for pod pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369 to disappear
Apr 26 22:09:47.095: INFO: Pod pod-configmaps-724c76b6-ce54-476f-a8b0-c9fa3f90d369 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:09:47.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9573" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:09:47.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3462.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3462.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3462.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3462.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3462.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3462.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 26 22:09:57.255: INFO: DNS probes using dns-3462/dns-test-1d6794e8-6be5-4c82-9d61-00ffbbdb47a5 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:09:57.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3462" for this suite.
• [SLOW TEST:10.277 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":220,"skipped":3741,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:09:57.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 26 22:09:57.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3" in namespace "projected-7109" to be "success or failure"
Apr 26 22:09:57.961: INFO: Pod "downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355056ms
Apr 26 22:10:00.129: INFO: Pod "downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173028614s
Apr 26 22:10:02.133: INFO: Pod "downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177005345s
STEP: Saw pod success
Apr 26 22:10:02.133: INFO: Pod "downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3" satisfied condition "success or failure"
Apr 26 22:10:02.136: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3 container client-container:
STEP: delete the pod
Apr 26 22:10:02.151: INFO: Waiting for pod downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3 to disappear
Apr 26 22:10:02.155: INFO: Pod downwardapi-volume-7b06a10f-0137-4e13-801d-ab890af5c1b3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:10:02.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7109" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3750,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:10:02.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 26 22:10:02.659: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 26 22:10:04.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535802, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535802, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535802, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535802, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 22:10:07.745: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:10:07.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3521" for this suite.
STEP: Destroying namespace "webhook-3521-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.959 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":222,"skipped":3794,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:10:08.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 26 22:10:16.268: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 26 22:10:16.286: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 26 22:10:18.286: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 26 22:10:18.292: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 26 22:10:20.286: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 26 22:10:20.301: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:10:20.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9349" for this suite.
• [SLOW TEST:12.203 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3798,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:10:20.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-e754538e-324c-4130-bc00-46dfe12c40e4 STEP: Creating a pod to test consume configMaps Apr 26 22:10:20.413: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7" in namespace "projected-3297" to be "success or failure" Apr 26 22:10:20.417: INFO: Pod "pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.712697ms Apr 26 22:10:22.420: INFO: Pod "pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007148999s Apr 26 22:10:24.445: INFO: Pod "pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031794011s STEP: Saw pod success Apr 26 22:10:24.445: INFO: Pod "pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7" satisfied condition "success or failure" Apr 26 22:10:24.448: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7 container projected-configmap-volume-test: STEP: delete the pod Apr 26 22:10:24.500: INFO: Waiting for pod pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7 to disappear Apr 26 22:10:24.513: INFO: Pod pod-projected-configmaps-f1d73c2f-48b3-4855-984d-97a8402e97e7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:10:24.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3297" for this suite. 
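The projected-configMap test above mounts a ConfigMap into the pod through a `projected` volume and reads the file back. A minimal sketch of such a pod follows; the ConfigMap name, key, mount path, and image are illustrative assumptions, not the exact values generated by this run.

```yaml
# Sketch of a pod consuming a ConfigMap via a projected volume, as
# exercised by the test above. Names, paths, and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: k8s.gcr.io/busybox:1.29   # assumed image
    # Print the projected file so the test can verify its contents.
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # assumed ConfigMap name
```

The pod runs to completion ("Succeeded"), which is why the log waits for the "success or failure" condition rather than readiness.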
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3805,"failed":0} ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:10:24.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:10:24.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7657" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":225,"skipped":3805,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:10:24.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 22:10:25.361: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 22:10:27.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535825, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535825, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535825, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535825, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 22:10:30.433: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:10:31.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9828" for this suite. STEP: Destroying namespace "webhook-9828-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.674 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":226,"skipped":3808,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:10:31.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:10:31.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1358' Apr 26 22:10:32.164: INFO: stderr: "" Apr 26 22:10:32.164: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 26 22:10:32.164: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1358' Apr 26 22:10:32.603: INFO: stderr: "" Apr 26 22:10:32.603: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 26 22:10:33.631: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:10:33.631: INFO: Found 0 / 1 Apr 26 22:10:34.608: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:10:34.608: INFO: Found 0 / 1 Apr 26 22:10:35.608: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:10:35.608: INFO: Found 1 / 1 Apr 26 22:10:35.608: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 26 22:10:35.611: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:10:35.611: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 26 22:10:35.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-bb9qz --namespace=kubectl-1358' Apr 26 22:10:35.729: INFO: stderr: "" Apr 26 22:10:35.729: INFO: stdout: "Name: agnhost-master-bb9qz\nNamespace: kubectl-1358\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Sun, 26 Apr 2020 22:10:32 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.215\nIPs:\n IP: 10.244.2.215\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2cd9c394613659a29c80a77b871330388e6e82a49d71dc34cea014ea756ed346\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 26 Apr 2020 22:10:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pzmsf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled 
True \nVolumes:\n default-token-pzmsf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pzmsf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-1358/agnhost-master-bb9qz to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Apr 26 22:10:35.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1358' Apr 26 22:10:35.846: INFO: stderr: "" Apr 26 22:10:35.846: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1358\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-bb9qz\n" Apr 26 22:10:35.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1358' Apr 26 22:10:35.946: INFO: stderr: "" Apr 26 22:10:35.946: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1358\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.98.40.107\nPort: 
6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.215:6379\nSession Affinity: None\nEvents: \n" Apr 26 22:10:35.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 26 22:10:36.075: INFO: stderr: "" Apr 26 22:10:36.075: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sun, 26 Apr 2020 22:10:33 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 26 Apr 2020 22:08:29 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 26 Apr 2020 22:08:29 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 26 Apr 2020 22:08:29 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 26 Apr 2020 22:08:29 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 
110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 42d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 42d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 42d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 42d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 26 22:10:36.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1358' Apr 26 22:10:36.198: INFO: stderr: "" Apr 26 22:10:36.198: INFO: stdout: "Name: kubectl-1358\nLabels: e2e-framework=kubectl\n e2e-run=5573ce9e-d66e-4c38-bb9a-e60f768b3ded\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:10:36.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1358" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":227,"skipped":3816,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:10:36.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 22:10:37.283: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 22:10:39.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535837, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723535837, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535837, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535837, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:10:41.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535837, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535837, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535837, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723535837, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 22:10:44.392: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook 
Apr 26 22:10:44.412: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:10:44.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6543" for this suite. STEP: Destroying namespace "webhook-6543-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.415 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":228,"skipped":3837,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:10:44.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 26 22:10:44.770: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:10:59.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-418" for this suite. • [SLOW TEST:14.612 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3852,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:10:59.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0b71b228-2d32-4cd3-9e6d-b4ec0e468bbc STEP: Creating a pod to test consume configMaps Apr 26 22:10:59.330: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d" in namespace "projected-7140" to be "success or failure" Apr 26 22:10:59.353: INFO: Pod "pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.33003ms Apr 26 22:11:01.374: INFO: Pod "pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043136454s Apr 26 22:11:03.392: INFO: Pod "pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061127019s STEP: Saw pod success Apr 26 22:11:03.392: INFO: Pod "pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d" satisfied condition "success or failure" Apr 26 22:11:03.395: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d container projected-configmap-volume-test: STEP: delete the pod Apr 26 22:11:03.414: INFO: Waiting for pod pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d to disappear Apr 26 22:11:03.418: INFO: Pod pod-projected-configmaps-52eab752-15b2-4428-a88f-ad7dc5bb4a3d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:03.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7140" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3856,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:03.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Apr 26 22:11:03.532: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix451896373/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:03.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-341" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":231,"skipped":3857,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:03.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:10.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1377" for this suite. • [SLOW TEST:7.080 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":232,"skipped":3860,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:10.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 26 22:11:10.740: INFO: namespace kubectl-7693 Apr 26 22:11:10.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7693' Apr 26 22:11:10.971: INFO: stderr: "" Apr 26 22:11:10.971: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 26 22:11:11.976: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:11:11.976: INFO: Found 0 / 1 Apr 26 22:11:12.976: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:11:12.976: INFO: Found 0 / 1 Apr 26 22:11:13.976: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:11:13.976: INFO: Found 1 / 1 Apr 26 22:11:13.976: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 26 22:11:13.979: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:11:13.979: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 26 22:11:13.979: INFO: wait on agnhost-master startup in kubectl-7693 Apr 26 22:11:13.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-dj7b2 agnhost-master --namespace=kubectl-7693' Apr 26 22:11:14.102: INFO: stderr: "" Apr 26 22:11:14.102: INFO: stdout: "Paused\n" STEP: exposing RC Apr 26 22:11:14.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7693' Apr 26 22:11:14.305: INFO: stderr: "" Apr 26 22:11:14.305: INFO: stdout: "service/rm2 exposed\n" Apr 26 22:11:14.311: INFO: Service rm2 in namespace kubectl-7693 found. STEP: exposing service Apr 26 22:11:16.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7693' Apr 26 22:11:16.445: INFO: stderr: "" Apr 26 22:11:16.445: INFO: stdout: "service/rm3 exposed\n" Apr 26 22:11:16.455: INFO: Service rm3 in namespace kubectl-7693 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:18.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7693" for this suite. 
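The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` invocation in the log is roughly equivalent to applying a Service manifest like the following sketch. The selector labels are inferred from the RC's pod template as shown earlier in this log (app=agnhost, role=master); treat the rest as an approximation of what `kubectl expose` generates.

```yaml
# Approximate Service produced by the `kubectl expose rc agnhost-master
# --name=rm2 --port=1234 --target-port=6379` command in the log above.
# Selector labels inferred from the RC pod template in this run.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-7693
spec:
  selector:
    app: agnhost
    role: master
  ports:
  - protocol: TCP
    port: 1234        # port exposed by the Service
    targetPort: 6379  # container port on the agnhost pod
```

The second `expose` in the log then wraps this Service in another one (rm3) the same way, which is what the test asserts.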
• [SLOW TEST:7.777 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":233,"skipped":3880,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:18.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 26 22:11:18.566: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5475 /api/v1/namespaces/watch-5475/configmaps/e2e-watch-test-resource-version 6f5b04c3-8b45-415d-b1a1-d5a0f6ad232e 11297339 0 2020-04-26 22:11:18 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 26 22:11:18.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5475 /api/v1/namespaces/watch-5475/configmaps/e2e-watch-test-resource-version 6f5b04c3-8b45-415d-b1a1-d5a0f6ad232e 11297340 0 2020-04-26 22:11:18 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:18.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5475" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":234,"skipped":3882,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:18.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 26 22:11:18.667: INFO: Waiting up to 5m0s for pod 
"downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c" in namespace "downward-api-5816" to be "success or failure" Apr 26 22:11:18.691: INFO: Pod "downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.34998ms Apr 26 22:11:20.695: INFO: Pod "downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027997587s Apr 26 22:11:22.699: INFO: Pod "downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032405813s STEP: Saw pod success Apr 26 22:11:22.699: INFO: Pod "downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c" satisfied condition "success or failure" Apr 26 22:11:22.702: INFO: Trying to get logs from node jerma-worker pod downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c container dapi-container: STEP: delete the pod Apr 26 22:11:22.740: INFO: Waiting for pod downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c to disappear Apr 26 22:11:22.746: INFO: Pod downward-api-baf5bd24-4497-45dd-a687-517fcec44c3c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:22.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5816" for this suite. 
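The downward API test above injects `limits.cpu/memory` and `requests.cpu/memory` into the container's environment via `resourceFieldRef`. A minimal sketch of such a pod — image, command, and resource values are illustrative, not taken from the log:

```yaml
# Resource limits/requests exposed as env vars via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
```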
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3895,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:22.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 26 22:11:22.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0" in namespace "downward-api-3120" to be "success or failure" Apr 26 22:11:22.806: INFO: Pod "downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007298ms Apr 26 22:11:24.810: INFO: Pod "downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007716238s Apr 26 22:11:26.814: INFO: Pod "downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012256423s STEP: Saw pod success Apr 26 22:11:26.814: INFO: Pod "downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0" satisfied condition "success or failure" Apr 26 22:11:26.817: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0 container client-container: STEP: delete the pod Apr 26 22:11:26.838: INFO: Waiting for pod downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0 to disappear Apr 26 22:11:26.842: INFO: Pod downwardapi-volume-28d1c04f-c654-445d-9664-285f6f7af3c0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:26.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3120" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:26.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 26 22:11:27.151: INFO: Waiting up to 5m0s for pod 
"pod-51898591-a57e-4957-a597-9d1049894e89" in namespace "emptydir-1242" to be "success or failure" Apr 26 22:11:27.160: INFO: Pod "pod-51898591-a57e-4957-a597-9d1049894e89": Phase="Pending", Reason="", readiness=false. Elapsed: 9.357112ms Apr 26 22:11:29.163: INFO: Pod "pod-51898591-a57e-4957-a597-9d1049894e89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012429563s Apr 26 22:11:31.168: INFO: Pod "pod-51898591-a57e-4957-a597-9d1049894e89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01775942s STEP: Saw pod success Apr 26 22:11:31.168: INFO: Pod "pod-51898591-a57e-4957-a597-9d1049894e89" satisfied condition "success or failure" Apr 26 22:11:31.171: INFO: Trying to get logs from node jerma-worker pod pod-51898591-a57e-4957-a597-9d1049894e89 container test-container: STEP: delete the pod Apr 26 22:11:31.232: INFO: Waiting for pod pod-51898591-a57e-4957-a597-9d1049894e89 to disappear Apr 26 22:11:31.240: INFO: Pod pod-51898591-a57e-4957-a597-9d1049894e89 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:11:31.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1242" for this suite. 
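The emptyDir case above exercises a file written with mode 0644 as root on the node's default medium. A hedged sketch of an equivalent pod — the image and command are illustrative, since the log does not show the pod spec:

```yaml
# emptyDir on the default medium; the test verifies a 0644 root-owned file.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}          # default medium (node storage); medium: Memory would use tmpfs
```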
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3930,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:11:31.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-78a09b9f-0a78-462d-afff-6a639ce3208c STEP: Creating configMap with name cm-test-opt-upd-7c5e0f2b-357d-4b21-84a2-1704356d6b43 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-78a09b9f-0a78-462d-afff-6a639ce3208c STEP: Updating configmap cm-test-opt-upd-7c5e0f2b-357d-4b21-84a2-1704356d6b43 STEP: Creating configMap with name cm-test-opt-create-9eb95d78-a912-4eea-a4cb-e377cb818184 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:12:41.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1128" for this suite. 
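The optional-ConfigMap test above (create/update/delete of `cm-test-opt-*` reflected in a mounted volume) relies on `optional: true`, which lets the pod start even when the referenced ConfigMap does not exist yet. A minimal sketch with illustrative names:

```yaml
# Optional ConfigMap volume: the pod runs before the ConfigMap exists,
# and later create/update/delete operations show up in the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-test-opt-create-demo   # illustrative; may not exist at pod start
      optional: true
```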
• [SLOW TEST:70.493 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3944,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:12:41.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-8b7d88c5-f7f3-4528-850b-cf78084ad27e STEP: Creating a pod to test consume configMaps Apr 26 22:12:41.843: INFO: Waiting up to 5m0s for pod "pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e" in namespace "configmap-8178" to be "success or failure" Apr 26 22:12:41.861: INFO: Pod "pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.761467ms Apr 26 22:12:43.865: INFO: Pod "pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022126211s Apr 26 22:12:45.870: INFO: Pod "pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02689177s STEP: Saw pod success Apr 26 22:12:45.870: INFO: Pod "pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e" satisfied condition "success or failure" Apr 26 22:12:45.873: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e container configmap-volume-test: STEP: delete the pod Apr 26 22:12:45.894: INFO: Waiting for pod pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e to disappear Apr 26 22:12:45.898: INFO: Pod pod-configmaps-42313c26-1464-43ea-be49-d4112e52de2e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:12:45.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8178" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3952,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:12:45.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-407 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 26 22:12:46.024: INFO: Found 0 stateful pods, waiting for 3 Apr 26 22:12:56.035: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:12:56.035: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:12:56.035: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 26 22:13:06.029: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:13:06.029: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:13:06.029: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 26 22:13:06.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 26 22:13:06.329: INFO: stderr: "I0426 22:13:06.171250 4154 log.go:172] (0xc0003c2dc0) (0xc0005e79a0) Create stream\nI0426 22:13:06.171310 4154 log.go:172] (0xc0003c2dc0) (0xc0005e79a0) Stream added, broadcasting: 1\nI0426 22:13:06.174234 4154 log.go:172] (0xc0003c2dc0) Reply frame received for 1\nI0426 22:13:06.174292 4154 log.go:172] (0xc0003c2dc0) (0xc000a66000) Create stream\nI0426 22:13:06.174305 4154 log.go:172] (0xc0003c2dc0) (0xc000a66000) Stream added, broadcasting: 3\nI0426 22:13:06.175292 4154 log.go:172] (0xc0003c2dc0) Reply frame received for 3\nI0426 22:13:06.175344 4154 log.go:172] (0xc0003c2dc0) (0xc0005e7b80) Create 
stream\nI0426 22:13:06.175357 4154 log.go:172] (0xc0003c2dc0) (0xc0005e7b80) Stream added, broadcasting: 5\nI0426 22:13:06.176559 4154 log.go:172] (0xc0003c2dc0) Reply frame received for 5\nI0426 22:13:06.284409 4154 log.go:172] (0xc0003c2dc0) Data frame received for 5\nI0426 22:13:06.284454 4154 log.go:172] (0xc0005e7b80) (5) Data frame handling\nI0426 22:13:06.284495 4154 log.go:172] (0xc0005e7b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 22:13:06.319627 4154 log.go:172] (0xc0003c2dc0) Data frame received for 5\nI0426 22:13:06.319657 4154 log.go:172] (0xc0005e7b80) (5) Data frame handling\nI0426 22:13:06.319673 4154 log.go:172] (0xc0003c2dc0) Data frame received for 3\nI0426 22:13:06.319690 4154 log.go:172] (0xc000a66000) (3) Data frame handling\nI0426 22:13:06.319698 4154 log.go:172] (0xc000a66000) (3) Data frame sent\nI0426 22:13:06.319968 4154 log.go:172] (0xc0003c2dc0) Data frame received for 3\nI0426 22:13:06.320000 4154 log.go:172] (0xc000a66000) (3) Data frame handling\nI0426 22:13:06.322005 4154 log.go:172] (0xc0003c2dc0) Data frame received for 1\nI0426 22:13:06.322038 4154 log.go:172] (0xc0005e79a0) (1) Data frame handling\nI0426 22:13:06.322076 4154 log.go:172] (0xc0005e79a0) (1) Data frame sent\nI0426 22:13:06.322230 4154 log.go:172] (0xc0003c2dc0) (0xc0005e79a0) Stream removed, broadcasting: 1\nI0426 22:13:06.322308 4154 log.go:172] (0xc0003c2dc0) Go away received\nI0426 22:13:06.322773 4154 log.go:172] (0xc0003c2dc0) (0xc0005e79a0) Stream removed, broadcasting: 1\nI0426 22:13:06.322799 4154 log.go:172] (0xc0003c2dc0) (0xc000a66000) Stream removed, broadcasting: 3\nI0426 22:13:06.322811 4154 log.go:172] (0xc0003c2dc0) (0xc0005e7b80) Stream removed, broadcasting: 5\n" Apr 26 22:13:06.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 22:13:06.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 26 22:13:16.362: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 26 22:13:26.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 22:13:26.725: INFO: stderr: "I0426 22:13:26.602036 4176 log.go:172] (0xc0007a6a50) (0xc00075c000) Create stream\nI0426 22:13:26.602094 4176 log.go:172] (0xc0007a6a50) (0xc00075c000) Stream added, broadcasting: 1\nI0426 22:13:26.611673 4176 log.go:172] (0xc0007a6a50) Reply frame received for 1\nI0426 22:13:26.611722 4176 log.go:172] (0xc0007a6a50) (0xc00063d9a0) Create stream\nI0426 22:13:26.611740 4176 log.go:172] (0xc0007a6a50) (0xc00063d9a0) Stream added, broadcasting: 3\nI0426 22:13:26.615503 4176 log.go:172] (0xc0007a6a50) Reply frame received for 3\nI0426 22:13:26.615565 4176 log.go:172] (0xc0007a6a50) (0xc00063db80) Create stream\nI0426 22:13:26.616562 4176 log.go:172] (0xc0007a6a50) (0xc00063db80) Stream added, broadcasting: 5\nI0426 22:13:26.617563 4176 log.go:172] (0xc0007a6a50) Reply frame received for 5\nI0426 22:13:26.717542 4176 log.go:172] (0xc0007a6a50) Data frame received for 5\nI0426 22:13:26.717588 4176 log.go:172] (0xc00063db80) (5) Data frame handling\nI0426 22:13:26.717602 4176 log.go:172] (0xc00063db80) (5) Data frame sent\nI0426 22:13:26.717612 4176 log.go:172] (0xc0007a6a50) Data frame received for 5\nI0426 22:13:26.717620 4176 log.go:172] (0xc00063db80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 22:13:26.717642 4176 log.go:172] (0xc0007a6a50) Data frame received for 3\nI0426 22:13:26.717652 4176 log.go:172] (0xc00063d9a0) (3) Data frame handling\nI0426 22:13:26.717661 4176 log.go:172] (0xc00063d9a0) (3) Data frame 
sent\nI0426 22:13:26.717670 4176 log.go:172] (0xc0007a6a50) Data frame received for 3\nI0426 22:13:26.717682 4176 log.go:172] (0xc00063d9a0) (3) Data frame handling\nI0426 22:13:26.719290 4176 log.go:172] (0xc0007a6a50) Data frame received for 1\nI0426 22:13:26.719322 4176 log.go:172] (0xc00075c000) (1) Data frame handling\nI0426 22:13:26.719356 4176 log.go:172] (0xc00075c000) (1) Data frame sent\nI0426 22:13:26.719382 4176 log.go:172] (0xc0007a6a50) (0xc00075c000) Stream removed, broadcasting: 1\nI0426 22:13:26.719749 4176 log.go:172] (0xc0007a6a50) (0xc00075c000) Stream removed, broadcasting: 1\nI0426 22:13:26.719780 4176 log.go:172] (0xc0007a6a50) (0xc00063d9a0) Stream removed, broadcasting: 3\nI0426 22:13:26.719935 4176 log.go:172] (0xc0007a6a50) (0xc00063db80) Stream removed, broadcasting: 5\n" Apr 26 22:13:26.725: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 22:13:26.725: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 22:13:36.746: INFO: Waiting for StatefulSet statefulset-407/ss2 to complete update Apr 26 22:13:36.746: INFO: Waiting for Pod statefulset-407/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 26 22:13:36.746: INFO: Waiting for Pod statefulset-407/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 26 22:13:36.746: INFO: Waiting for Pod statefulset-407/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 26 22:13:46.754: INFO: Waiting for StatefulSet statefulset-407/ss2 to complete update Apr 26 22:13:46.754: INFO: Waiting for Pod statefulset-407/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 26 22:13:56.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true' Apr 26 22:13:57.037: INFO: stderr: "I0426 22:13:56.897060 4198 log.go:172] (0xc0009a3290) (0xc0008b2500) Create stream\nI0426 22:13:56.897230 4198 log.go:172] (0xc0009a3290) (0xc0008b2500) Stream added, broadcasting: 1\nI0426 22:13:56.901924 4198 log.go:172] (0xc0009a3290) Reply frame received for 1\nI0426 22:13:56.901975 4198 log.go:172] (0xc0009a3290) (0xc0007edae0) Create stream\nI0426 22:13:56.901996 4198 log.go:172] (0xc0009a3290) (0xc0007edae0) Stream added, broadcasting: 3\nI0426 22:13:56.903041 4198 log.go:172] (0xc0009a3290) Reply frame received for 3\nI0426 22:13:56.903095 4198 log.go:172] (0xc0009a3290) (0xc00067e6e0) Create stream\nI0426 22:13:56.903109 4198 log.go:172] (0xc0009a3290) (0xc00067e6e0) Stream added, broadcasting: 5\nI0426 22:13:56.904196 4198 log.go:172] (0xc0009a3290) Reply frame received for 5\nI0426 22:13:56.998987 4198 log.go:172] (0xc0009a3290) Data frame received for 5\nI0426 22:13:56.999012 4198 log.go:172] (0xc00067e6e0) (5) Data frame handling\nI0426 22:13:56.999026 4198 log.go:172] (0xc00067e6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0426 22:13:57.029493 4198 log.go:172] (0xc0009a3290) Data frame received for 3\nI0426 22:13:57.029528 4198 log.go:172] (0xc0007edae0) (3) Data frame handling\nI0426 22:13:57.029539 4198 log.go:172] (0xc0007edae0) (3) Data frame sent\nI0426 22:13:57.029545 4198 log.go:172] (0xc0009a3290) Data frame received for 3\nI0426 22:13:57.029550 4198 log.go:172] (0xc0007edae0) (3) Data frame handling\nI0426 22:13:57.029581 4198 log.go:172] (0xc0009a3290) Data frame received for 5\nI0426 22:13:57.029592 4198 log.go:172] (0xc00067e6e0) (5) Data frame handling\nI0426 22:13:57.031452 4198 log.go:172] (0xc0009a3290) Data frame received for 1\nI0426 22:13:57.031477 4198 log.go:172] (0xc0008b2500) (1) Data frame handling\nI0426 22:13:57.031512 4198 log.go:172] (0xc0008b2500) (1) Data frame sent\nI0426 22:13:57.031529 4198 log.go:172] (0xc0009a3290) (0xc0008b2500) 
Stream removed, broadcasting: 1\nI0426 22:13:57.031657 4198 log.go:172] (0xc0009a3290) Go away received\nI0426 22:13:57.031911 4198 log.go:172] (0xc0009a3290) (0xc0008b2500) Stream removed, broadcasting: 1\nI0426 22:13:57.031936 4198 log.go:172] (0xc0009a3290) (0xc0007edae0) Stream removed, broadcasting: 3\nI0426 22:13:57.031949 4198 log.go:172] (0xc0009a3290) (0xc00067e6e0) Stream removed, broadcasting: 5\n" Apr 26 22:13:57.037: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 26 22:13:57.037: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 26 22:14:07.067: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 26 22:14:17.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 26 22:14:17.324: INFO: stderr: "I0426 22:14:17.234395 4217 log.go:172] (0xc0006bab00) (0xc000624320) Create stream\nI0426 22:14:17.234447 4217 log.go:172] (0xc0006bab00) (0xc000624320) Stream added, broadcasting: 1\nI0426 22:14:17.236697 4217 log.go:172] (0xc0006bab00) Reply frame received for 1\nI0426 22:14:17.236761 4217 log.go:172] (0xc0006bab00) (0xc00078f360) Create stream\nI0426 22:14:17.236783 4217 log.go:172] (0xc0006bab00) (0xc00078f360) Stream added, broadcasting: 3\nI0426 22:14:17.237865 4217 log.go:172] (0xc0006bab00) Reply frame received for 3\nI0426 22:14:17.237908 4217 log.go:172] (0xc0006bab00) (0xc0005905a0) Create stream\nI0426 22:14:17.237925 4217 log.go:172] (0xc0006bab00) (0xc0005905a0) Stream added, broadcasting: 5\nI0426 22:14:17.238919 4217 log.go:172] (0xc0006bab00) Reply frame received for 5\nI0426 22:14:17.319044 4217 log.go:172] (0xc0006bab00) Data frame received for 5\nI0426 22:14:17.319089 4217 log.go:172] (0xc0005905a0) (5) Data frame handling\nI0426 
22:14:17.319137 4217 log.go:172] (0xc0005905a0) (5) Data frame sent\nI0426 22:14:17.319160 4217 log.go:172] (0xc0006bab00) Data frame received for 5\nI0426 22:14:17.319170 4217 log.go:172] (0xc0005905a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0426 22:14:17.319197 4217 log.go:172] (0xc0006bab00) Data frame received for 3\nI0426 22:14:17.319208 4217 log.go:172] (0xc00078f360) (3) Data frame handling\nI0426 22:14:17.319221 4217 log.go:172] (0xc00078f360) (3) Data frame sent\nI0426 22:14:17.319232 4217 log.go:172] (0xc0006bab00) Data frame received for 3\nI0426 22:14:17.319245 4217 log.go:172] (0xc00078f360) (3) Data frame handling\nI0426 22:14:17.320285 4217 log.go:172] (0xc0006bab00) Data frame received for 1\nI0426 22:14:17.320304 4217 log.go:172] (0xc000624320) (1) Data frame handling\nI0426 22:14:17.320318 4217 log.go:172] (0xc000624320) (1) Data frame sent\nI0426 22:14:17.320326 4217 log.go:172] (0xc0006bab00) (0xc000624320) Stream removed, broadcasting: 1\nI0426 22:14:17.320533 4217 log.go:172] (0xc0006bab00) Go away received\nI0426 22:14:17.320614 4217 log.go:172] (0xc0006bab00) (0xc000624320) Stream removed, broadcasting: 1\nI0426 22:14:17.320632 4217 log.go:172] (0xc0006bab00) (0xc00078f360) Stream removed, broadcasting: 3\nI0426 22:14:17.320639 4217 log.go:172] (0xc0006bab00) (0xc0005905a0) Stream removed, broadcasting: 5\n" Apr 26 22:14:17.324: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 26 22:14:17.324: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 26 22:14:27.425: INFO: Waiting for StatefulSet statefulset-407/ss2 to complete update Apr 26 22:14:27.425: INFO: Waiting for Pod statefulset-407/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 26 22:14:27.425: INFO: Waiting for Pod statefulset-407/ss2-1 to have revision ss2-65c7964b94 update revision 
ss2-84f9d6bf57 Apr 26 22:14:27.425: INFO: Waiting for Pod statefulset-407/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 26 22:14:37.447: INFO: Waiting for StatefulSet statefulset-407/ss2 to complete update Apr 26 22:14:37.447: INFO: Waiting for Pod statefulset-407/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 26 22:14:37.447: INFO: Waiting for Pod statefulset-407/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 26 22:14:47.470: INFO: Waiting for StatefulSet statefulset-407/ss2 to complete update Apr 26 22:14:47.470: INFO: Waiting for Pod statefulset-407/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 26 22:14:47.470: INFO: Waiting for Pod statefulset-407/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 26 22:14:57.432: INFO: Waiting for StatefulSet statefulset-407/ss2 to complete update Apr 26 22:14:57.432: INFO: Waiting for Pod statefulset-407/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 26 22:15:07.432: INFO: Deleting all statefulset in ns statefulset-407 Apr 26 22:15:07.435: INFO: Scaling statefulset ss2 to 0 Apr 26 22:15:37.451: INFO: Waiting for statefulset status.replicas updated to 0 Apr 26 22:15:37.453: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:15:37.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-407" for this suite. 
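The rolling-update/rollback flow above is driven by a `RollingUpdate` StatefulSet: changing `.spec.template` creates a new controller revision (ss2-84f9d6bf57) and pods are replaced in reverse ordinal order; rolling back restores the prior revision (ss2-65c7964b94). A sketch using the image tags from the log — the service name, labels, and container name are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-407
spec:
  serviceName: test            # headless service created by the test
  replicas: 3
  selector:
    matchLabels: {app: ss2}    # assumed label
  updateStrategy:
    type: RollingUpdate        # pods updated in reverse ordinal order (2, 1, 0)
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: webserver        # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine   # updated to 2.4.39-alpine in the test
```

The update shown in the log amounts to `kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine`, and the rollback to reapplying the old template (or `kubectl rollout undo statefulset/ss2`).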
• [SLOW TEST:171.564 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":240,"skipped":3957,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:15:37.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:15:37.550: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 26 22:15:42.553: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 26 22:15:42.553: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 26 22:15:44.587: INFO: Creating deployment "test-rollover-deployment" Apr 26 22:15:44.608: INFO: 
Make sure deployment "test-rollover-deployment" performs scaling operations Apr 26 22:15:46.617: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 26 22:15:46.623: INFO: Ensure that both replica sets have 1 created replica Apr 26 22:15:46.629: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 26 22:15:46.634: INFO: Updating deployment test-rollover-deployment Apr 26 22:15:46.634: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 26 22:15:48.767: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 26 22:15:48.771: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 26 22:15:48.776: INFO: all replica sets need to contain the pod-template-hash label Apr 26 22:15:48.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536147, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:15:50.783: INFO: all replica sets need to contain the pod-template-hash label Apr 26 22:15:50.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:15:52.784: INFO: all replica sets need to contain the pod-template-hash label Apr 26 22:15:52.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:15:54.785: INFO: all replica sets need to contain the pod-template-hash label Apr 
26 22:15:54.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:15:56.784: INFO: all replica sets need to contain the pod-template-hash label Apr 26 22:15:56.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Apr 26 22:15:58.784: INFO: all replica sets need to contain the pod-template-hash label Apr 26 22:15:58.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536144, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:16:00.784: INFO: Apr 26 22:16:00.784: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 26 22:16:00.791: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2544 /apis/apps/v1/namespaces/deployment-2544/deployments/test-rollover-deployment ae5781ed-fda5-417f-bb95-40a1e9486a83 11298774 2 2020-04-26 22:15:44 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045bb7b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-26 22:15:44 +0000 UTC,LastTransitionTime:2020-04-26 22:15:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-26 22:16:00 +0000 UTC,LastTransitionTime:2020-04-26 22:15:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 26 22:16:00.794: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-2544 /apis/apps/v1/namespaces/deployment-2544/replicasets/test-rollover-deployment-574d6dfbff acbd4ab4-f9f0-4507-bb35-583ce2f2f958 11298763 2 2020-04-26 22:15:46 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
test-rollover-deployment ae5781ed-fda5-417f-bb95-40a1e9486a83 0xc003bd9907 0xc003bd9908}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003bd9978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 26 22:16:00.794: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 26 22:16:00.794: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2544 /apis/apps/v1/namespaces/deployment-2544/replicasets/test-rollover-controller 4e01add4-12e4-4b98-ae02-045ad2c72865 11298772 2 2020-04-26 22:15:37 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ae5781ed-fda5-417f-bb95-40a1e9486a83 0xc003bd981f 0xc003bd9830}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 
+0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003bd9898 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 22:16:00.794: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-2544 /apis/apps/v1/namespaces/deployment-2544/replicasets/test-rollover-deployment-f6c94f66c 8fbdf7aa-eeb7-42c2-93d1-4bfa47205729 11298715 2 2020-04-26 22:15:44 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ae5781ed-fda5-417f-bb95-40a1e9486a83 0xc003bd99e0 0xc003bd99e1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003bd9a58 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 26 22:16:00.797: INFO: Pod "test-rollover-deployment-574d6dfbff-d2twr" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-d2twr test-rollover-deployment-574d6dfbff- deployment-2544 /api/v1/namespaces/deployment-2544/pods/test-rollover-deployment-574d6dfbff-d2twr 5ab92b05-67af-4296-a25a-ad4a1b15ba03 11298731 0 2020-04-26 22:15:46 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff acbd4ab4-f9f0-4507-bb35-583ce2f2f958 0xc0030b2b57 0xc0030b2b58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlv5d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlv5d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlv5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil
,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:15:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:15:50 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:15:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-26 22:15:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.65,StartTime:2020-04-26 22:15:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-26 22:15:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://df232e8a75abe98992f27e5ce3d47f7bc4db20861417cda8fb62de0794939d06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:16:00.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2544" for this suite. 
• [SLOW TEST:23.334 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":241,"skipped":3963,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:16:00.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 26 22:16:01.138: INFO: Waiting up to 5m0s for pod "pod-737e2e92-6f93-4aef-b390-595f53461acf" in namespace "emptydir-2148" to be "success or failure" Apr 26 22:16:01.155: INFO: Pod "pod-737e2e92-6f93-4aef-b390-595f53461acf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.206718ms Apr 26 22:16:03.159: INFO: Pod "pod-737e2e92-6f93-4aef-b390-595f53461acf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021510136s Apr 26 22:16:05.164: INFO: Pod "pod-737e2e92-6f93-4aef-b390-595f53461acf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025947332s STEP: Saw pod success Apr 26 22:16:05.164: INFO: Pod "pod-737e2e92-6f93-4aef-b390-595f53461acf" satisfied condition "success or failure" Apr 26 22:16:05.167: INFO: Trying to get logs from node jerma-worker2 pod pod-737e2e92-6f93-4aef-b390-595f53461acf container test-container: STEP: delete the pod Apr 26 22:16:05.221: INFO: Waiting for pod pod-737e2e92-6f93-4aef-b390-595f53461acf to disappear Apr 26 22:16:05.239: INFO: Pod pod-737e2e92-6f93-4aef-b390-595f53461acf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:16:05.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2148" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:16:05.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 26 22:16:05.325: INFO: Waiting up to 5m0s for pod "pod-9633d443-a108-4065-950e-b2672d112415" in namespace "emptydir-5806" to be "success or failure" 
Apr 26 22:16:05.347: INFO: Pod "pod-9633d443-a108-4065-950e-b2672d112415": Phase="Pending", Reason="", readiness=false. Elapsed: 22.175558ms Apr 26 22:16:07.351: INFO: Pod "pod-9633d443-a108-4065-950e-b2672d112415": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025946365s Apr 26 22:16:09.355: INFO: Pod "pod-9633d443-a108-4065-950e-b2672d112415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030285446s STEP: Saw pod success Apr 26 22:16:09.355: INFO: Pod "pod-9633d443-a108-4065-950e-b2672d112415" satisfied condition "success or failure" Apr 26 22:16:09.358: INFO: Trying to get logs from node jerma-worker pod pod-9633d443-a108-4065-950e-b2672d112415 container test-container: STEP: delete the pod Apr 26 22:16:09.401: INFO: Waiting for pod pod-9633d443-a108-4065-950e-b2672d112415 to disappear Apr 26 22:16:09.406: INFO: Pod pod-9633d443-a108-4065-950e-b2672d112415 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:16:09.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5806" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3993,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:16:09.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 26 22:16:09.512: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:16:26.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9858" for this suite. 
• [SLOW TEST:16.796 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":244,"skipped":3998,"failed":0} [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:16:26.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Apr 26 22:16:26.326: INFO: Waiting up to 5m0s for pod "pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5" in namespace "emptydir-389" to be "success or failure" Apr 26 22:16:26.342: INFO: Pod "pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.91636ms Apr 26 22:16:28.480: INFO: Pod "pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.154587634s Apr 26 22:16:30.484: INFO: Pod "pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158375354s STEP: Saw pod success Apr 26 22:16:30.484: INFO: Pod "pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5" satisfied condition "success or failure" Apr 26 22:16:30.487: INFO: Trying to get logs from node jerma-worker2 pod pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5 container test-container: STEP: delete the pod Apr 26 22:16:30.554: INFO: Waiting for pod pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5 to disappear Apr 26 22:16:30.569: INFO: Pod pod-f76b1580-ac4a-4fad-a2b7-996a1dc993e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:16:30.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-389" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3998,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:16:30.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the 
test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 26 22:16:40.702: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:40.702: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:40.736879 6 log.go:172] (0xc001d46580) (0xc000cdd2c0) Create stream I0426 22:16:40.736917 6 log.go:172] (0xc001d46580) (0xc000cdd2c0) Stream added, broadcasting: 1 I0426 22:16:40.739025 6 log.go:172] (0xc001d46580) Reply frame received for 1 I0426 22:16:40.739065 6 log.go:172] (0xc001d46580) (0xc0028740a0) Create stream I0426 22:16:40.739077 6 log.go:172] (0xc001d46580) (0xc0028740a0) Stream added, broadcasting: 3 I0426 22:16:40.740116 6 log.go:172] (0xc001d46580) Reply frame received for 3 I0426 22:16:40.740178 6 log.go:172] (0xc001d46580) (0xc000cdd4a0) Create stream I0426 22:16:40.740199 6 log.go:172] (0xc001d46580) (0xc000cdd4a0) Stream added, broadcasting: 5 I0426 22:16:40.741313 6 log.go:172] (0xc001d46580) Reply frame received for 5 I0426 22:16:40.808016 6 log.go:172] (0xc001d46580) Data frame received for 5 I0426 22:16:40.808082 6 log.go:172] (0xc000cdd4a0) (5) Data frame handling I0426 22:16:40.808130 6 log.go:172] (0xc001d46580) Data frame received for 3 I0426 22:16:40.808150 6 log.go:172] (0xc0028740a0) (3) Data frame handling I0426 22:16:40.808181 6 log.go:172] (0xc0028740a0) (3) Data frame sent I0426 22:16:40.808193 6 log.go:172] (0xc001d46580) Data frame received for 3 I0426 22:16:40.808204 6 log.go:172] (0xc0028740a0) (3) Data frame handling I0426 22:16:40.809953 6 log.go:172] (0xc001d46580) Data frame received for 1 I0426 22:16:40.809982 6 log.go:172] (0xc000cdd2c0) (1) Data frame handling I0426 22:16:40.810003 6 log.go:172] (0xc000cdd2c0) (1) Data frame sent I0426 
22:16:40.810021 6 log.go:172] (0xc001d46580) (0xc000cdd2c0) Stream removed, broadcasting: 1 I0426 22:16:40.810035 6 log.go:172] (0xc001d46580) Go away received I0426 22:16:40.810217 6 log.go:172] (0xc001d46580) (0xc000cdd2c0) Stream removed, broadcasting: 1 I0426 22:16:40.810248 6 log.go:172] (0xc001d46580) (0xc0028740a0) Stream removed, broadcasting: 3 I0426 22:16:40.810261 6 log.go:172] (0xc001d46580) (0xc000cdd4a0) Stream removed, broadcasting: 5 Apr 26 22:16:40.810: INFO: Exec stderr: "" Apr 26 22:16:40.810: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:40.810: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:40.841425 6 log.go:172] (0xc0018d24d0) (0xc002874be0) Create stream I0426 22:16:40.841474 6 log.go:172] (0xc0018d24d0) (0xc002874be0) Stream added, broadcasting: 1 I0426 22:16:40.842949 6 log.go:172] (0xc0018d24d0) Reply frame received for 1 I0426 22:16:40.842983 6 log.go:172] (0xc0018d24d0) (0xc0019f00a0) Create stream I0426 22:16:40.842995 6 log.go:172] (0xc0018d24d0) (0xc0019f00a0) Stream added, broadcasting: 3 I0426 22:16:40.843645 6 log.go:172] (0xc0018d24d0) Reply frame received for 3 I0426 22:16:40.843689 6 log.go:172] (0xc0018d24d0) (0xc002874dc0) Create stream I0426 22:16:40.843698 6 log.go:172] (0xc0018d24d0) (0xc002874dc0) Stream added, broadcasting: 5 I0426 22:16:40.844373 6 log.go:172] (0xc0018d24d0) Reply frame received for 5 I0426 22:16:40.908937 6 log.go:172] (0xc0018d24d0) Data frame received for 5 I0426 22:16:40.908978 6 log.go:172] (0xc002874dc0) (5) Data frame handling I0426 22:16:40.909006 6 log.go:172] (0xc0018d24d0) Data frame received for 3 I0426 22:16:40.909045 6 log.go:172] (0xc0019f00a0) (3) Data frame handling I0426 22:16:40.909066 6 log.go:172] (0xc0019f00a0) (3) Data frame sent I0426 22:16:40.909080 6 log.go:172] (0xc0018d24d0) Data frame received for 3 
I0426 22:16:40.909104 6 log.go:172] (0xc0019f00a0) (3) Data frame handling I0426 22:16:40.910309 6 log.go:172] (0xc0018d24d0) Data frame received for 1 I0426 22:16:40.910331 6 log.go:172] (0xc002874be0) (1) Data frame handling I0426 22:16:40.910345 6 log.go:172] (0xc002874be0) (1) Data frame sent I0426 22:16:40.910379 6 log.go:172] (0xc0018d24d0) (0xc002874be0) Stream removed, broadcasting: 1 I0426 22:16:40.910451 6 log.go:172] (0xc0018d24d0) Go away received I0426 22:16:40.910529 6 log.go:172] (0xc0018d24d0) (0xc002874be0) Stream removed, broadcasting: 1 I0426 22:16:40.910560 6 log.go:172] (0xc0018d24d0) (0xc0019f00a0) Stream removed, broadcasting: 3 I0426 22:16:40.910571 6 log.go:172] (0xc0018d24d0) (0xc002874dc0) Stream removed, broadcasting: 5 Apr 26 22:16:40.910: INFO: Exec stderr: "" Apr 26 22:16:40.910: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:40.910: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:40.943803 6 log.go:172] (0xc0018d2840) (0xc002874f00) Create stream I0426 22:16:40.943825 6 log.go:172] (0xc0018d2840) (0xc002874f00) Stream added, broadcasting: 1 I0426 22:16:40.945924 6 log.go:172] (0xc0018d2840) Reply frame received for 1 I0426 22:16:40.945968 6 log.go:172] (0xc0018d2840) (0xc002874fa0) Create stream I0426 22:16:40.945984 6 log.go:172] (0xc0018d2840) (0xc002874fa0) Stream added, broadcasting: 3 I0426 22:16:40.946930 6 log.go:172] (0xc0018d2840) Reply frame received for 3 I0426 22:16:40.946961 6 log.go:172] (0xc0018d2840) (0xc002875040) Create stream I0426 22:16:40.946969 6 log.go:172] (0xc0018d2840) (0xc002875040) Stream added, broadcasting: 5 I0426 22:16:40.947913 6 log.go:172] (0xc0018d2840) Reply frame received for 5 I0426 22:16:41.015108 6 log.go:172] (0xc0018d2840) Data frame received for 5 I0426 22:16:41.015169 6 log.go:172] (0xc002875040) (5) Data frame handling 
I0426 22:16:41.015201 6 log.go:172] (0xc0018d2840) Data frame received for 3 I0426 22:16:41.015225 6 log.go:172] (0xc002874fa0) (3) Data frame handling I0426 22:16:41.015272 6 log.go:172] (0xc002874fa0) (3) Data frame sent I0426 22:16:41.015294 6 log.go:172] (0xc0018d2840) Data frame received for 3 I0426 22:16:41.015311 6 log.go:172] (0xc002874fa0) (3) Data frame handling I0426 22:16:41.016409 6 log.go:172] (0xc0018d2840) Data frame received for 1 I0426 22:16:41.016437 6 log.go:172] (0xc002874f00) (1) Data frame handling I0426 22:16:41.016457 6 log.go:172] (0xc002874f00) (1) Data frame sent I0426 22:16:41.016526 6 log.go:172] (0xc0018d2840) (0xc002874f00) Stream removed, broadcasting: 1 I0426 22:16:41.016577 6 log.go:172] (0xc0018d2840) Go away received I0426 22:16:41.016660 6 log.go:172] (0xc0018d2840) (0xc002874f00) Stream removed, broadcasting: 1 I0426 22:16:41.016700 6 log.go:172] (0xc0018d2840) (0xc002874fa0) Stream removed, broadcasting: 3 I0426 22:16:41.016714 6 log.go:172] (0xc0018d2840) (0xc002875040) Stream removed, broadcasting: 5 Apr 26 22:16:41.016: INFO: Exec stderr: "" Apr 26 22:16:41.016: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:41.016: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:41.046562 6 log.go:172] (0xc0018d2e70) (0xc002875220) Create stream I0426 22:16:41.046590 6 log.go:172] (0xc0018d2e70) (0xc002875220) Stream added, broadcasting: 1 I0426 22:16:41.048394 6 log.go:172] (0xc0018d2e70) Reply frame received for 1 I0426 22:16:41.048429 6 log.go:172] (0xc0018d2e70) (0xc0028752c0) Create stream I0426 22:16:41.048442 6 log.go:172] (0xc0018d2e70) (0xc0028752c0) Stream added, broadcasting: 3 I0426 22:16:41.049734 6 log.go:172] (0xc0018d2e70) Reply frame received for 3 I0426 22:16:41.049769 6 log.go:172] (0xc0018d2e70) (0xc001697ae0) Create stream I0426 22:16:41.049795 
6 log.go:172] (0xc0018d2e70) (0xc001697ae0) Stream added, broadcasting: 5 I0426 22:16:41.050654 6 log.go:172] (0xc0018d2e70) Reply frame received for 5 I0426 22:16:41.106697 6 log.go:172] (0xc0018d2e70) Data frame received for 3 I0426 22:16:41.106731 6 log.go:172] (0xc0018d2e70) Data frame received for 5 I0426 22:16:41.106752 6 log.go:172] (0xc001697ae0) (5) Data frame handling I0426 22:16:41.106774 6 log.go:172] (0xc0028752c0) (3) Data frame handling I0426 22:16:41.106787 6 log.go:172] (0xc0028752c0) (3) Data frame sent I0426 22:16:41.106794 6 log.go:172] (0xc0018d2e70) Data frame received for 3 I0426 22:16:41.106800 6 log.go:172] (0xc0028752c0) (3) Data frame handling I0426 22:16:41.108288 6 log.go:172] (0xc0018d2e70) Data frame received for 1 I0426 22:16:41.108320 6 log.go:172] (0xc002875220) (1) Data frame handling I0426 22:16:41.108342 6 log.go:172] (0xc002875220) (1) Data frame sent I0426 22:16:41.108355 6 log.go:172] (0xc0018d2e70) (0xc002875220) Stream removed, broadcasting: 1 I0426 22:16:41.108374 6 log.go:172] (0xc0018d2e70) Go away received I0426 22:16:41.108425 6 log.go:172] (0xc0018d2e70) (0xc002875220) Stream removed, broadcasting: 1 I0426 22:16:41.108437 6 log.go:172] (0xc0018d2e70) (0xc0028752c0) Stream removed, broadcasting: 3 I0426 22:16:41.108444 6 log.go:172] (0xc0018d2e70) (0xc001697ae0) Stream removed, broadcasting: 5 Apr 26 22:16:41.108: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 26 22:16:41.108: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:41.108: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:41.138279 6 log.go:172] (0xc001e65ce0) (0xc0007f6000) Create stream I0426 22:16:41.138305 6 log.go:172] (0xc001e65ce0) (0xc0007f6000) Stream added, broadcasting: 1 I0426 22:16:41.140057 6 
log.go:172] (0xc001e65ce0) Reply frame received for 1 I0426 22:16:41.140084 6 log.go:172] (0xc001e65ce0) (0xc001d09860) Create stream I0426 22:16:41.140090 6 log.go:172] (0xc001e65ce0) (0xc001d09860) Stream added, broadcasting: 3 I0426 22:16:41.140820 6 log.go:172] (0xc001e65ce0) Reply frame received for 3 I0426 22:16:41.140872 6 log.go:172] (0xc001e65ce0) (0xc002875360) Create stream I0426 22:16:41.140895 6 log.go:172] (0xc001e65ce0) (0xc002875360) Stream added, broadcasting: 5 I0426 22:16:41.141911 6 log.go:172] (0xc001e65ce0) Reply frame received for 5 I0426 22:16:41.219138 6 log.go:172] (0xc001e65ce0) Data frame received for 3 I0426 22:16:41.219169 6 log.go:172] (0xc001d09860) (3) Data frame handling I0426 22:16:41.219177 6 log.go:172] (0xc001d09860) (3) Data frame sent I0426 22:16:41.219182 6 log.go:172] (0xc001e65ce0) Data frame received for 3 I0426 22:16:41.219216 6 log.go:172] (0xc001e65ce0) Data frame received for 5 I0426 22:16:41.219267 6 log.go:172] (0xc002875360) (5) Data frame handling I0426 22:16:41.219299 6 log.go:172] (0xc001d09860) (3) Data frame handling I0426 22:16:41.220593 6 log.go:172] (0xc001e65ce0) Data frame received for 1 I0426 22:16:41.220605 6 log.go:172] (0xc0007f6000) (1) Data frame handling I0426 22:16:41.220615 6 log.go:172] (0xc0007f6000) (1) Data frame sent I0426 22:16:41.220622 6 log.go:172] (0xc001e65ce0) (0xc0007f6000) Stream removed, broadcasting: 1 I0426 22:16:41.220683 6 log.go:172] (0xc001e65ce0) (0xc0007f6000) Stream removed, broadcasting: 1 I0426 22:16:41.220692 6 log.go:172] (0xc001e65ce0) (0xc001d09860) Stream removed, broadcasting: 3 I0426 22:16:41.220762 6 log.go:172] (0xc001e65ce0) Go away received I0426 22:16:41.220864 6 log.go:172] (0xc001e65ce0) (0xc002875360) Stream removed, broadcasting: 5 Apr 26 22:16:41.220: INFO: Exec stderr: "" Apr 26 22:16:41.220: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:41.220: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:41.263651 6 log.go:172] (0xc0016c8580) (0xc0007f6780) Create stream I0426 22:16:41.263677 6 log.go:172] (0xc0016c8580) (0xc0007f6780) Stream added, broadcasting: 1 I0426 22:16:41.266163 6 log.go:172] (0xc0016c8580) Reply frame received for 1 I0426 22:16:41.266199 6 log.go:172] (0xc0016c8580) (0xc0007f6fa0) Create stream I0426 22:16:41.266209 6 log.go:172] (0xc0016c8580) (0xc0007f6fa0) Stream added, broadcasting: 3 I0426 22:16:41.267037 6 log.go:172] (0xc0016c8580) Reply frame received for 3 I0426 22:16:41.267065 6 log.go:172] (0xc0016c8580) (0xc0007f70e0) Create stream I0426 22:16:41.267081 6 log.go:172] (0xc0016c8580) (0xc0007f70e0) Stream added, broadcasting: 5 I0426 22:16:41.267888 6 log.go:172] (0xc0016c8580) Reply frame received for 5 I0426 22:16:41.326817 6 log.go:172] (0xc0016c8580) Data frame received for 3 I0426 22:16:41.326879 6 log.go:172] (0xc0016c8580) Data frame received for 5 I0426 22:16:41.326934 6 log.go:172] (0xc0007f70e0) (5) Data frame handling I0426 22:16:41.326964 6 log.go:172] (0xc0007f6fa0) (3) Data frame handling I0426 22:16:41.326983 6 log.go:172] (0xc0007f6fa0) (3) Data frame sent I0426 22:16:41.327006 6 log.go:172] (0xc0016c8580) Data frame received for 3 I0426 22:16:41.327023 6 log.go:172] (0xc0007f6fa0) (3) Data frame handling I0426 22:16:41.328763 6 log.go:172] (0xc0016c8580) Data frame received for 1 I0426 22:16:41.328800 6 log.go:172] (0xc0007f6780) (1) Data frame handling I0426 22:16:41.328827 6 log.go:172] (0xc0007f6780) (1) Data frame sent I0426 22:16:41.328841 6 log.go:172] (0xc0016c8580) (0xc0007f6780) Stream removed, broadcasting: 1 I0426 22:16:41.328859 6 log.go:172] (0xc0016c8580) Go away received I0426 22:16:41.329044 6 log.go:172] (0xc0016c8580) (0xc0007f6780) Stream removed, broadcasting: 1 I0426 22:16:41.329070 6 log.go:172] (0xc0016c8580) (0xc0007f6fa0) Stream removed, broadcasting: 3 I0426 
22:16:41.329082 6 log.go:172] (0xc0016c8580) (0xc0007f70e0) Stream removed, broadcasting: 5 Apr 26 22:16:41.329: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 26 22:16:41.329: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:41.329: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:41.362538 6 log.go:172] (0xc0016c8bb0) (0xc0007f72c0) Create stream I0426 22:16:41.362560 6 log.go:172] (0xc0016c8bb0) (0xc0007f72c0) Stream added, broadcasting: 1 I0426 22:16:41.364432 6 log.go:172] (0xc0016c8bb0) Reply frame received for 1 I0426 22:16:41.364483 6 log.go:172] (0xc0016c8bb0) (0xc0007f74a0) Create stream I0426 22:16:41.364513 6 log.go:172] (0xc0016c8bb0) (0xc0007f74a0) Stream added, broadcasting: 3 I0426 22:16:41.365650 6 log.go:172] (0xc0016c8bb0) Reply frame received for 3 I0426 22:16:41.365680 6 log.go:172] (0xc0016c8bb0) (0xc0007f7680) Create stream I0426 22:16:41.365690 6 log.go:172] (0xc0016c8bb0) (0xc0007f7680) Stream added, broadcasting: 5 I0426 22:16:41.366694 6 log.go:172] (0xc0016c8bb0) Reply frame received for 5 I0426 22:16:41.435519 6 log.go:172] (0xc0016c8bb0) Data frame received for 5 I0426 22:16:41.435561 6 log.go:172] (0xc0007f7680) (5) Data frame handling I0426 22:16:41.435608 6 log.go:172] (0xc0016c8bb0) Data frame received for 3 I0426 22:16:41.435664 6 log.go:172] (0xc0007f74a0) (3) Data frame handling I0426 22:16:41.435702 6 log.go:172] (0xc0007f74a0) (3) Data frame sent I0426 22:16:41.435731 6 log.go:172] (0xc0016c8bb0) Data frame received for 3 I0426 22:16:41.435744 6 log.go:172] (0xc0007f74a0) (3) Data frame handling I0426 22:16:41.436903 6 log.go:172] (0xc0016c8bb0) Data frame received for 1 I0426 22:16:41.436929 6 log.go:172] (0xc0007f72c0) (1) Data frame handling I0426 22:16:41.436969 6 
log.go:172] (0xc0007f72c0) (1) Data frame sent I0426 22:16:41.436985 6 log.go:172] (0xc0016c8bb0) (0xc0007f72c0) Stream removed, broadcasting: 1 I0426 22:16:41.437010 6 log.go:172] (0xc0016c8bb0) Go away received I0426 22:16:41.437093 6 log.go:172] (0xc0016c8bb0) (0xc0007f72c0) Stream removed, broadcasting: 1 I0426 22:16:41.437123 6 log.go:172] (0xc0016c8bb0) (0xc0007f74a0) Stream removed, broadcasting: 3 I0426 22:16:41.437140 6 log.go:172] (0xc0016c8bb0) (0xc0007f7680) Stream removed, broadcasting: 5 Apr 26 22:16:41.437: INFO: Exec stderr: "" Apr 26 22:16:41.437: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:41.437: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:41.470187 6 log.go:172] (0xc001d46bb0) (0xc000cdd7c0) Create stream I0426 22:16:41.470218 6 log.go:172] (0xc001d46bb0) (0xc000cdd7c0) Stream added, broadcasting: 1 I0426 22:16:41.471808 6 log.go:172] (0xc001d46bb0) Reply frame received for 1 I0426 22:16:41.471841 6 log.go:172] (0xc001d46bb0) (0xc000cdd860) Create stream I0426 22:16:41.471850 6 log.go:172] (0xc001d46bb0) (0xc000cdd860) Stream added, broadcasting: 3 I0426 22:16:41.472794 6 log.go:172] (0xc001d46bb0) Reply frame received for 3 I0426 22:16:41.472836 6 log.go:172] (0xc001d46bb0) (0xc001d09a40) Create stream I0426 22:16:41.472851 6 log.go:172] (0xc001d46bb0) (0xc001d09a40) Stream added, broadcasting: 5 I0426 22:16:41.473818 6 log.go:172] (0xc001d46bb0) Reply frame received for 5 I0426 22:16:41.528578 6 log.go:172] (0xc001d46bb0) Data frame received for 5 I0426 22:16:41.528615 6 log.go:172] (0xc001d09a40) (5) Data frame handling I0426 22:16:41.528641 6 log.go:172] (0xc001d46bb0) Data frame received for 3 I0426 22:16:41.528661 6 log.go:172] (0xc000cdd860) (3) Data frame handling I0426 22:16:41.528678 6 log.go:172] (0xc000cdd860) (3) Data frame sent I0426 
22:16:41.528691 6 log.go:172] (0xc001d46bb0) Data frame received for 3 I0426 22:16:41.528702 6 log.go:172] (0xc000cdd860) (3) Data frame handling I0426 22:16:41.530556 6 log.go:172] (0xc001d46bb0) Data frame received for 1 I0426 22:16:41.530587 6 log.go:172] (0xc000cdd7c0) (1) Data frame handling I0426 22:16:41.530607 6 log.go:172] (0xc000cdd7c0) (1) Data frame sent I0426 22:16:41.530636 6 log.go:172] (0xc001d46bb0) (0xc000cdd7c0) Stream removed, broadcasting: 1 I0426 22:16:41.530671 6 log.go:172] (0xc001d46bb0) Go away received I0426 22:16:41.530778 6 log.go:172] (0xc001d46bb0) (0xc000cdd7c0) Stream removed, broadcasting: 1 I0426 22:16:41.530801 6 log.go:172] (0xc001d46bb0) (0xc000cdd860) Stream removed, broadcasting: 3 I0426 22:16:41.530831 6 log.go:172] (0xc001d46bb0) (0xc001d09a40) Stream removed, broadcasting: 5 Apr 26 22:16:41.530: INFO: Exec stderr: "" Apr 26 22:16:41.530: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:16:41.530: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:41.589102 6 log.go:172] (0xc001d471e0) (0xc000cddea0) Create stream I0426 22:16:41.589238 6 log.go:172] (0xc001d471e0) (0xc000cddea0) Stream added, broadcasting: 1 I0426 22:16:41.591259 6 log.go:172] (0xc001d471e0) Reply frame received for 1 I0426 22:16:41.591305 6 log.go:172] (0xc001d471e0) (0xc0019f03c0) Create stream I0426 22:16:41.591314 6 log.go:172] (0xc001d471e0) (0xc0019f03c0) Stream added, broadcasting: 3 I0426 22:16:41.592238 6 log.go:172] (0xc001d471e0) Reply frame received for 3 I0426 22:16:41.592300 6 log.go:172] (0xc001d471e0) (0xc001d82000) Create stream I0426 22:16:41.592332 6 log.go:172] (0xc001d471e0) (0xc001d82000) Stream added, broadcasting: 5 I0426 22:16:41.593476 6 log.go:172] (0xc001d471e0) Reply frame received for 5 I0426 22:16:41.650135 6 log.go:172] (0xc001d471e0) Data frame received 
for 3 I0426 22:16:41.650186 6 log.go:172] (0xc0019f03c0) (3) Data frame handling I0426 22:16:41.650217 6 log.go:172] (0xc0019f03c0) (3) Data frame sent I0426 22:16:41.650240 6 log.go:172] (0xc001d471e0) Data frame received for 3 I0426 22:16:41.650261 6 log.go:172] (0xc0019f03c0) (3) Data frame handling I0426 22:16:41.650358 6 log.go:172] (0xc001d471e0) Data frame received for 5 I0426 22:16:41.650378 6 log.go:172] (0xc001d82000) (5) Data frame handling I0426 22:16:41.652389 6 log.go:172] (0xc001d471e0) Data frame received for 1 I0426 22:16:41.652410 6 log.go:172] (0xc000cddea0) (1) Data frame handling I0426 22:16:41.652429 6 log.go:172] (0xc000cddea0) (1) Data frame sent I0426 22:16:41.652468 6 log.go:172] (0xc001d471e0) (0xc000cddea0) Stream removed, broadcasting: 1 I0426 22:16:41.652606 6 log.go:172] (0xc001d471e0) (0xc000cddea0) Stream removed, broadcasting: 1 I0426 22:16:41.652713 6 log.go:172] (0xc001d471e0) (0xc0019f03c0) Stream removed, broadcasting: 3 I0426 22:16:41.652749 6 log.go:172] (0xc001d471e0) (0xc001d82000) Stream removed, broadcasting: 5 Apr 26 22:16:41.652: INFO: Exec stderr: "" Apr 26 22:16:41.652: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1462 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0426 22:16:41.652852 6 log.go:172] (0xc001d471e0) Go away received Apr 26 22:16:41.652: INFO: >>> kubeConfig: /root/.kube/config I0426 22:16:41.685248 6 log.go:172] (0xc0026a8160) (0xc0012fa3c0) Create stream I0426 22:16:41.685283 6 log.go:172] (0xc0026a8160) (0xc0012fa3c0) Stream added, broadcasting: 1 I0426 22:16:41.687362 6 log.go:172] (0xc0026a8160) Reply frame received for 1 I0426 22:16:41.687402 6 log.go:172] (0xc0026a8160) (0xc0007f77c0) Create stream I0426 22:16:41.687418 6 log.go:172] (0xc0026a8160) (0xc0007f77c0) Stream added, broadcasting: 3 I0426 22:16:41.688434 6 log.go:172] (0xc0026a8160) Reply frame received for 3 I0426 
22:16:41.688476 6 log.go:172] (0xc0026a8160) (0xc002875400) Create stream I0426 22:16:41.688491 6 log.go:172] (0xc0026a8160) (0xc002875400) Stream added, broadcasting: 5 I0426 22:16:41.689703 6 log.go:172] (0xc0026a8160) Reply frame received for 5 I0426 22:16:41.765419 6 log.go:172] (0xc0026a8160) Data frame received for 5 I0426 22:16:41.765482 6 log.go:172] (0xc002875400) (5) Data frame handling I0426 22:16:41.765542 6 log.go:172] (0xc0026a8160) Data frame received for 3 I0426 22:16:41.765599 6 log.go:172] (0xc0007f77c0) (3) Data frame handling I0426 22:16:41.765633 6 log.go:172] (0xc0007f77c0) (3) Data frame sent I0426 22:16:41.765847 6 log.go:172] (0xc0026a8160) Data frame received for 3 I0426 22:16:41.765876 6 log.go:172] (0xc0007f77c0) (3) Data frame handling I0426 22:16:41.766944 6 log.go:172] (0xc0026a8160) Data frame received for 1 I0426 22:16:41.766986 6 log.go:172] (0xc0012fa3c0) (1) Data frame handling I0426 22:16:41.767026 6 log.go:172] (0xc0012fa3c0) (1) Data frame sent I0426 22:16:41.767097 6 log.go:172] (0xc0026a8160) (0xc0012fa3c0) Stream removed, broadcasting: 1 I0426 22:16:41.767131 6 log.go:172] (0xc0026a8160) Go away received I0426 22:16:41.767287 6 log.go:172] (0xc0026a8160) (0xc0012fa3c0) Stream removed, broadcasting: 1 I0426 22:16:41.767308 6 log.go:172] (0xc0026a8160) (0xc0007f77c0) Stream removed, broadcasting: 3 I0426 22:16:41.767322 6 log.go:172] (0xc0026a8160) (0xc002875400) Stream removed, broadcasting: 5 Apr 26 22:16:41.767: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:16:41.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1462" for this suite. 
• [SLOW TEST:11.198 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4016,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:16:41.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 26 22:16:41.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095" in namespace "downward-api-3362" to be "success or failure"
Apr 26 22:16:41.839: INFO: Pod "downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095": Phase="Pending", Reason="", readiness=false. Elapsed: 11.098185ms
Apr 26 22:16:43.858: INFO: Pod "downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029850541s
Apr 26 22:16:45.861: INFO: Pod "downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032861281s
STEP: Saw pod success
Apr 26 22:16:45.861: INFO: Pod "downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095" satisfied condition "success or failure"
Apr 26 22:16:45.863: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095 container client-container:
STEP: delete the pod
Apr 26 22:16:45.925: INFO: Waiting for pod downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095 to disappear
Apr 26 22:16:45.929: INFO: Pod downwardapi-volume-21d041c1-e3d9-4f5b-a369-3116acad8095 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:16:45.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3362" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4066,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:16:45.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Apr 26 22:16:46.519: INFO: created pod pod-service-account-defaultsa
Apr 26 22:16:46.519: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 26 22:16:46.528: INFO: created pod pod-service-account-mountsa
Apr 26 22:16:46.528: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 26 22:16:46.558: INFO: created pod pod-service-account-nomountsa
Apr 26 22:16:46.558: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 26 22:16:46.570: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 26 22:16:46.570: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 26 22:16:46.668: INFO: created pod pod-service-account-mountsa-mountspec
Apr 26 22:16:46.668: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 26 22:16:46.684: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 26 22:16:46.684: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 26 22:16:46.731: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 26 22:16:46.731: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 26 22:16:46.793: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 26 22:16:46.793: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 26 22:16:46.816: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 26 22:16:46.817: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:16:46.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5330" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":248,"skipped":4067,"failed":0}
SS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:16:46.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 22:16:47.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:16:59.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9832" for this suite.
• [SLOW TEST:12.449 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4069,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:16:59.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Apr 26 22:16:59.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config
--namespace=kubectl-7156 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 26 22:17:05.277: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0426 22:17:05.186614 4239 log.go:172] (0xc0005260b0) (0xc0006d60a0) Create stream\nI0426 22:17:05.186658 4239 log.go:172] (0xc0005260b0) (0xc0006d60a0) Stream added, broadcasting: 1\nI0426 22:17:05.188816 4239 log.go:172] (0xc0005260b0) Reply frame received for 1\nI0426 22:17:05.188856 4239 log.go:172] (0xc0005260b0) (0xc0006e2000) Create stream\nI0426 22:17:05.188868 4239 log.go:172] (0xc0005260b0) (0xc0006e2000) Stream added, broadcasting: 3\nI0426 22:17:05.190042 4239 log.go:172] (0xc0005260b0) Reply frame received for 3\nI0426 22:17:05.190087 4239 log.go:172] (0xc0005260b0) (0xc0006d6140) Create stream\nI0426 22:17:05.190106 4239 log.go:172] (0xc0005260b0) (0xc0006d6140) Stream added, broadcasting: 5\nI0426 22:17:05.191300 4239 log.go:172] (0xc0005260b0) Reply frame received for 5\nI0426 22:17:05.191354 4239 log.go:172] (0xc0005260b0) (0xc000701ae0) Create stream\nI0426 22:17:05.191382 4239 log.go:172] (0xc0005260b0) (0xc000701ae0) Stream added, broadcasting: 7\nI0426 22:17:05.192391 4239 log.go:172] (0xc0005260b0) Reply frame received for 7\nI0426 22:17:05.192539 4239 log.go:172] (0xc0006e2000) (3) Writing data frame\nI0426 22:17:05.192691 4239 log.go:172] (0xc0006e2000) (3) Writing data frame\nI0426 22:17:05.194581 4239 log.go:172] (0xc0005260b0) Data frame received for 5\nI0426 22:17:05.194607 4239 log.go:172] (0xc0006d6140) (5) Data frame handling\nI0426 22:17:05.194622 4239 log.go:172] (0xc0006d6140) (5) Data frame sent\nI0426 22:17:05.194641 4239 log.go:172] (0xc0005260b0) Data frame received for 5\nI0426 
22:17:05.194670 4239 log.go:172] (0xc0006d6140) (5) Data frame handling\nI0426 22:17:05.194739 4239 log.go:172] (0xc0006d6140) (5) Data frame sent\nI0426 22:17:05.234883 4239 log.go:172] (0xc0005260b0) Data frame received for 7\nI0426 22:17:05.234925 4239 log.go:172] (0xc000701ae0) (7) Data frame handling\nI0426 22:17:05.234956 4239 log.go:172] (0xc0005260b0) Data frame received for 5\nI0426 22:17:05.234972 4239 log.go:172] (0xc0006d6140) (5) Data frame handling\nI0426 22:17:05.235815 4239 log.go:172] (0xc0005260b0) Data frame received for 1\nI0426 22:17:05.235850 4239 log.go:172] (0xc0005260b0) (0xc0006e2000) Stream removed, broadcasting: 3\nI0426 22:17:05.235922 4239 log.go:172] (0xc0006d60a0) (1) Data frame handling\nI0426 22:17:05.235953 4239 log.go:172] (0xc0006d60a0) (1) Data frame sent\nI0426 22:17:05.235968 4239 log.go:172] (0xc0005260b0) (0xc0006d60a0) Stream removed, broadcasting: 1\nI0426 22:17:05.235988 4239 log.go:172] (0xc0005260b0) Go away received\nI0426 22:17:05.236435 4239 log.go:172] (0xc0005260b0) (0xc0006d60a0) Stream removed, broadcasting: 1\nI0426 22:17:05.236463 4239 log.go:172] (0xc0005260b0) (0xc0006e2000) Stream removed, broadcasting: 3\nI0426 22:17:05.236474 4239 log.go:172] (0xc0005260b0) (0xc0006d6140) Stream removed, broadcasting: 5\nI0426 22:17:05.236484 4239 log.go:172] (0xc0005260b0) (0xc000701ae0) Stream removed, broadcasting: 7\n" Apr 26 22:17:05.278: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:17:07.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7156" for this suite. 
• [SLOW TEST:7.894 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":250,"skipped":4089,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:17:07.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 26 22:17:07.381: INFO: Waiting up to 5m0s for pod "pod-9f903391-a341-4b10-a44d-0897b7474f8d" in namespace "emptydir-6935" to be "success or failure" Apr 26 22:17:07.384: INFO: Pod "pod-9f903391-a341-4b10-a44d-0897b7474f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890251ms Apr 26 22:17:09.415: INFO: Pod "pod-9f903391-a341-4b10-a44d-0897b7474f8d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033342102s Apr 26 22:17:11.419: INFO: Pod "pod-9f903391-a341-4b10-a44d-0897b7474f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037718154s STEP: Saw pod success Apr 26 22:17:11.419: INFO: Pod "pod-9f903391-a341-4b10-a44d-0897b7474f8d" satisfied condition "success or failure" Apr 26 22:17:11.422: INFO: Trying to get logs from node jerma-worker pod pod-9f903391-a341-4b10-a44d-0897b7474f8d container test-container: STEP: delete the pod Apr 26 22:17:11.488: INFO: Waiting for pod pod-9f903391-a341-4b10-a44d-0897b7474f8d to disappear Apr 26 22:17:11.498: INFO: Pod pod-9f903391-a341-4b10-a44d-0897b7474f8d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:17:11.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6935" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4096,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:17:11.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating 
Agnhost RC Apr 26 22:17:11.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6390' Apr 26 22:17:11.887: INFO: stderr: "" Apr 26 22:17:11.887: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 26 22:17:12.892: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:17:12.892: INFO: Found 0 / 1 Apr 26 22:17:13.930: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:17:13.930: INFO: Found 0 / 1 Apr 26 22:17:14.890: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:17:14.890: INFO: Found 0 / 1 Apr 26 22:17:15.892: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:17:15.892: INFO: Found 1 / 1 Apr 26 22:17:15.892: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 26 22:17:15.895: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:17:15.895: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 26 22:17:15.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-85g69 --namespace=kubectl-6390 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 26 22:17:16.093: INFO: stderr: "" Apr 26 22:17:16.093: INFO: stdout: "pod/agnhost-master-85g69 patched\n" STEP: checking annotations Apr 26 22:17:16.105: INFO: Selector matched 1 pods for map[app:agnhost] Apr 26 22:17:16.105: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:17:16.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6390" for this suite. 
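The `kubectl patch pod ... -p '{"metadata":{"annotations":{"x":"y"}}}'` call above merges the patch body into the existing object. A minimal sketch of RFC 7386 JSON merge-patch semantics (kubectl actually sends a strategic merge patch for pods, but annotations, being a plain string map, merge the same way under both; the pod dict below is a simplified stand-in):

```python
def merge_patch(target, patch):
    """Apply an RFC 7386-style merge patch to a dict."""
    if not isinstance(patch, dict):
        return patch               # a non-object patch replaces the target
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null deletes the key
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

pod = {"metadata": {"name": "agnhost-master-85g69", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # {'x': 'y'}
```

Note that sibling fields (here `metadata.name`) are untouched; only the keys named in the patch change, which is what makes this a safe way to add one annotation.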
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":252,"skipped":4096,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:17:16.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4504.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4504.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 26 22:17:22.361: INFO: DNS probes using dns-4504/dns-test-583b1061-51c6-451b-97cf-6966863295da succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:17:22.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4504" for this suite. 
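The wheezy/jessie probe scripts above build the pod's DNS A record by splitting the pod IP on dots (`awk -F.`) and rejoining with dashes under `<namespace>.pod.cluster.local`. The same name construction in Python (the namespace matches this test's `dns-4504`; the IP is a made-up example):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Dashed-IP pod A record name, as the dig probes above query it."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.3", "dns-4504"))
# 10-244-1-3.dns-4504.pod.cluster.local
```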
• [SLOW TEST:6.299 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":253,"skipped":4100,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:17:22.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 26 22:17:22.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9775' Apr 26 22:17:22.849: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 26 22:17:22.849: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 26 22:17:22.877: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 26 22:17:22.907: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 26 22:17:22.915: INFO: scanned /root for discovery docs: Apr 26 22:17:22.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9775' Apr 26 22:17:38.876: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 26 22:17:38.876: INFO: stdout: "Created e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4\nScaling up e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 26 22:17:38.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9775' Apr 26 22:17:38.977: INFO: stderr: "" Apr 26 22:17:38.977: INFO: stdout: "e2e-test-httpd-rc-7rzhm e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4-vqpdz " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Apr 26 22:17:43.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9775' Apr 26 22:17:44.078: INFO: stderr: "" Apr 26 22:17:44.078: INFO: stdout: "e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4-vqpdz " Apr 26 22:17:44.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4-vqpdz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9775' Apr 26 22:17:44.176: INFO: stderr: "" Apr 26 22:17:44.176: INFO: stdout: "true" Apr 26 22:17:44.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4-vqpdz -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9775' Apr 26 22:17:44.273: INFO: stderr: "" Apr 26 22:17:44.273: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 26 22:17:44.273: INFO: e2e-test-httpd-rc-d8e8a387b0d172d8175f025528fcbac4-vqpdz is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 26 22:17:44.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9775' Apr 26 22:17:44.373: INFO: stderr: "" Apr 26 22:17:44.374: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:17:44.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9775" for this suite. 
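The rolling-update stdout above describes a surge-then-retire sequence: "scaling up ... from 0 to 1, scaling down ... from 1 to 0 (keep 1 pods available, don't exceed 2 pods)". A rough simulation of that scaling loop (the real logic lives in kubectl's deprecated `rolling-update` implementation; the function name, step strings, and surge bound of one pod here are simplified assumptions, not kubectl's code):

```python
def rolling_update(old_replicas, min_available=1):
    """Simulate surging a new RC up while scaling the old RC down."""
    max_pods = old_replicas + 1           # assume a surge budget of one pod
    old, new, steps = old_replicas, 0, []
    while old > 0 or new < old_replicas:
        if new < old_replicas and old + new < max_pods:
            new += 1                      # surge: bring up a new pod first
            steps.append(f"Scaling new up to {new}")
        elif old > 0 and old + new - 1 >= min_available:
            old -= 1                      # only then retire an old pod
            steps.append(f"Scaling old down to {old}")
        else:
            raise RuntimeError("cannot make progress within availability bounds")
    return steps

print(rolling_update(1))
# ['Scaling new up to 1', 'Scaling old down to 0']
```

With one replica this reproduces the two scaling steps in the log; total pods never exceed two and at least one pod stays available throughout.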
• [SLOW TEST:21.960 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":254,"skipped":4111,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:17:44.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 22:17:45.171: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 22:17:47.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 26 22:17:49.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536265, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 22:17:52.242: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with 
different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:17:52.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4470-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:17:53.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8951" for this suite. STEP: Destroying namespace "webhook-8951-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.191 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":255,"skipped":4113,"failed":0} [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:17:53.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Apr 26 22:17:53.716: INFO: Waiting up to 5m0s for pod "var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9" in namespace "var-expansion-405" to be "success or failure" Apr 26 22:17:53.747: INFO: Pod "var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.844227ms Apr 26 22:17:55.751: INFO: Pod "var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034789599s Apr 26 22:17:57.755: INFO: Pod "var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038516391s STEP: Saw pod success Apr 26 22:17:57.755: INFO: Pod "var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9" satisfied condition "success or failure" Apr 26 22:17:57.758: INFO: Trying to get logs from node jerma-worker pod var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9 container dapi-container: STEP: delete the pod Apr 26 22:17:57.777: INFO: Waiting for pod var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9 to disappear Apr 26 22:17:57.787: INFO: Pod var-expansion-1afe7ef5-b5f4-4433-8713-a0f4a8d834f9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:17:57.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-405" for this suite. 
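The Variable Expansion test above composes env vars whose values reference earlier ones with Kubernetes' `$(VAR)` syntax. A sketch of that expansion, resolving references against variables defined earlier in the list and leaving unknown references verbatim (the real expander also supports `$$` escaping, omitted here; the FOO/BAR names are illustrative, not the test's actual values):

```python
import re

def expand(value, defined):
    """Expand $(VAR) references in `value` using the `defined` mapping."""
    def repl(match):
        name = match.group(1)
        return defined.get(name, match.group(0))  # unknown refs stay as-is
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, value)

# Vars are expanded in declaration order, so later entries can reference
# earlier ones, which is what "composing env vars" means here.
env = {}
for name, raw in [("FOO", "foo-value"), ("BAR", "bar-value"),
                  ("FOOBAR", "$(FOO);;$(BAR)")]:
    env[name] = expand(raw, env)
print(env["FOOBAR"])  # foo-value;;bar-value
```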
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4113,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:17:57.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 26 22:17:57.888: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299751 0 2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 26 22:17:57.888: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299751 0 2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying 
configmap A and ensuring the correct watchers observe the notification Apr 26 22:18:07.897: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299805 0 2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 26 22:18:07.897: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299805 0 2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 26 22:18:17.906: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299836 0 2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 26 22:18:17.906: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299836 0 2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 26 22:18:27.913: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299866 0 
2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 26 22:18:27.913: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-a b9efaa8c-a6b7-4ce0-bd91-547ec4999483 11299866 0 2020-04-26 22:17:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 26 22:18:37.920: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-b f1cf24cc-c3f6-419d-bb0a-504869f5df35 11299897 0 2020-04-26 22:18:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 26 22:18:37.921: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-b f1cf24cc-c3f6-419d-bb0a-504869f5df35 11299897 0 2020-04-26 22:18:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 26 22:18:47.946: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-b f1cf24cc-c3f6-419d-bb0a-504869f5df35 11299926 0 2020-04-26 22:18:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 26 22:18:47.946: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7116 /api/v1/namespaces/watch-7116/configmaps/e2e-watch-test-configmap-b 
f1cf24cc-c3f6-419d-bb0a-504869f5df35 11299926 0 2020-04-26 22:18:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:18:57.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7116" for this suite. • [SLOW TEST:60.161 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":257,"skipped":4122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:18:57.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all 
receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:19:02.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2862" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":258,"skipped":4148,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:19:02.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 26 22:19:02.575: INFO: Waiting up to 5m0s for pod "pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3" in namespace "emptydir-5598" to be "success or failure" Apr 26 22:19:02.578: INFO: Pod "pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809147ms Apr 26 22:19:04.596: INFO: Pod "pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021277892s Apr 26 22:19:06.599: INFO: Pod "pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024640056s STEP: Saw pod success Apr 26 22:19:06.599: INFO: Pod "pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3" satisfied condition "success or failure" Apr 26 22:19:06.602: INFO: Trying to get logs from node jerma-worker pod pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3 container test-container: STEP: delete the pod Apr 26 22:19:06.622: INFO: Waiting for pod pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3 to disappear Apr 26 22:19:06.632: INFO: Pod pod-f4b3ed85-1f71-442b-be40-56b5d7d75ef3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:19:06.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5598" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4155,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:19:06.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 26 22:19:06.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2970' Apr 26 22:19:06.837: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 26 22:19:06.837: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Apr 26 22:19:08.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2970' Apr 26 22:19:09.099: INFO: stderr: "" Apr 26 22:19:09.099: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:19:09.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2970" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":260,"skipped":4159,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:19:09.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 22:19:10.023: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 22:19:12.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536350, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536350, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723536350, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536349, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 22:19:15.080: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:19:15.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4434-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:19:16.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5684" for this suite. STEP: Destroying namespace "webhook-5684-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.199 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":261,"skipped":4176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:19:16.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 26 22:19:16.915: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision 
set Apr 26 22:19:19.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536357, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536357, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536357, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536356, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 26 22:19:22.143: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:19:22.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:19:23.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2050" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.120 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":262,"skipped":4199,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:19:23.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:19:27.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-9922" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4204,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:19:27.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6975 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 26 22:19:27.610: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 26 22:19:53.741: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.82 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6975 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:19:53.741: INFO: >>> kubeConfig: /root/.kube/config I0426 22:19:53.777627 6 log.go:172] (0xc001d46bb0) (0xc00226a0a0) Create stream I0426 22:19:53.777682 6 log.go:172] (0xc001d46bb0) (0xc00226a0a0) Stream added, broadcasting: 1 I0426 22:19:53.779392 6 log.go:172] (0xc001d46bb0) Reply frame received for 1 I0426 22:19:53.779436 6 
log.go:172] (0xc001d46bb0) (0xc00226a140) Create stream I0426 22:19:53.779459 6 log.go:172] (0xc001d46bb0) (0xc00226a140) Stream added, broadcasting: 3 I0426 22:19:53.780329 6 log.go:172] (0xc001d46bb0) Reply frame received for 3 I0426 22:19:53.780360 6 log.go:172] (0xc001d46bb0) (0xc00226a1e0) Create stream I0426 22:19:53.780372 6 log.go:172] (0xc001d46bb0) (0xc00226a1e0) Stream added, broadcasting: 5 I0426 22:19:53.781630 6 log.go:172] (0xc001d46bb0) Reply frame received for 5 I0426 22:19:54.862551 6 log.go:172] (0xc001d46bb0) Data frame received for 3 I0426 22:19:54.862597 6 log.go:172] (0xc00226a140) (3) Data frame handling I0426 22:19:54.862628 6 log.go:172] (0xc00226a140) (3) Data frame sent I0426 22:19:54.862651 6 log.go:172] (0xc001d46bb0) Data frame received for 3 I0426 22:19:54.862672 6 log.go:172] (0xc00226a140) (3) Data frame handling I0426 22:19:54.862978 6 log.go:172] (0xc001d46bb0) Data frame received for 5 I0426 22:19:54.863001 6 log.go:172] (0xc00226a1e0) (5) Data frame handling I0426 22:19:54.864654 6 log.go:172] (0xc001d46bb0) Data frame received for 1 I0426 22:19:54.864683 6 log.go:172] (0xc00226a0a0) (1) Data frame handling I0426 22:19:54.864701 6 log.go:172] (0xc00226a0a0) (1) Data frame sent I0426 22:19:54.864729 6 log.go:172] (0xc001d46bb0) (0xc00226a0a0) Stream removed, broadcasting: 1 I0426 22:19:54.864812 6 log.go:172] (0xc001d46bb0) Go away received I0426 22:19:54.864860 6 log.go:172] (0xc001d46bb0) (0xc00226a0a0) Stream removed, broadcasting: 1 I0426 22:19:54.864890 6 log.go:172] (0xc001d46bb0) (0xc00226a140) Stream removed, broadcasting: 3 I0426 22:19:54.864916 6 log.go:172] (0xc001d46bb0) (0xc00226a1e0) Stream removed, broadcasting: 5 Apr 26 22:19:54.864: INFO: Found all expected endpoints: [netserver-0] Apr 26 22:19:54.868: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.235 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6975 PodName:host-test-container-pod ContainerName:agnhost Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 26 22:19:54.868: INFO: >>> kubeConfig: /root/.kube/config I0426 22:19:54.899898 6 log.go:172] (0xc001d47130) (0xc00226a460) Create stream I0426 22:19:54.899928 6 log.go:172] (0xc001d47130) (0xc00226a460) Stream added, broadcasting: 1 I0426 22:19:54.901835 6 log.go:172] (0xc001d47130) Reply frame received for 1 I0426 22:19:54.901871 6 log.go:172] (0xc001d47130) (0xc00295b0e0) Create stream I0426 22:19:54.901885 6 log.go:172] (0xc001d47130) (0xc00295b0e0) Stream added, broadcasting: 3 I0426 22:19:54.902882 6 log.go:172] (0xc001d47130) Reply frame received for 3 I0426 22:19:54.902917 6 log.go:172] (0xc001d47130) (0xc001d82280) Create stream I0426 22:19:54.902930 6 log.go:172] (0xc001d47130) (0xc001d82280) Stream added, broadcasting: 5 I0426 22:19:54.904038 6 log.go:172] (0xc001d47130) Reply frame received for 5 I0426 22:19:55.971663 6 log.go:172] (0xc001d47130) Data frame received for 3 I0426 22:19:55.971694 6 log.go:172] (0xc00295b0e0) (3) Data frame handling I0426 22:19:55.971724 6 log.go:172] (0xc00295b0e0) (3) Data frame sent I0426 22:19:55.971860 6 log.go:172] (0xc001d47130) Data frame received for 5 I0426 22:19:55.971899 6 log.go:172] (0xc001d82280) (5) Data frame handling I0426 22:19:55.972195 6 log.go:172] (0xc001d47130) Data frame received for 3 I0426 22:19:55.972225 6 log.go:172] (0xc00295b0e0) (3) Data frame handling I0426 22:19:55.974195 6 log.go:172] (0xc001d47130) Data frame received for 1 I0426 22:19:55.974235 6 log.go:172] (0xc00226a460) (1) Data frame handling I0426 22:19:55.974287 6 log.go:172] (0xc00226a460) (1) Data frame sent I0426 22:19:55.974326 6 log.go:172] (0xc001d47130) (0xc00226a460) Stream removed, broadcasting: 1 I0426 22:19:55.974376 6 log.go:172] (0xc001d47130) Go away received I0426 22:19:55.974512 6 log.go:172] (0xc001d47130) (0xc00226a460) Stream removed, broadcasting: 1 I0426 22:19:55.974548 6 log.go:172] (0xc001d47130) (0xc00295b0e0) Stream removed, broadcasting: 3 
I0426 22:19:55.974568 6 log.go:172] (0xc001d47130) (0xc001d82280) Stream removed, broadcasting: 5 Apr 26 22:19:55.974: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:19:55.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6975" for this suite. • [SLOW TEST:28.424 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:19:55.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing 
restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf in namespace container-probe-2689 Apr 26 22:20:00.089: INFO: Started pod liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf in namespace container-probe-2689 STEP: checking the pod's current state and verifying that restartCount is present Apr 26 22:20:00.091: INFO: Initial restart count of pod liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf is 0 Apr 26 22:20:14.164: INFO: Restart count of pod container-probe-2689/liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf is now 1 (14.072512546s elapsed) Apr 26 22:20:34.205: INFO: Restart count of pod container-probe-2689/liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf is now 2 (34.114039759s elapsed) Apr 26 22:20:54.246: INFO: Restart count of pod container-probe-2689/liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf is now 3 (54.154402427s elapsed) Apr 26 22:21:14.380: INFO: Restart count of pod container-probe-2689/liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf is now 4 (1m14.288899366s elapsed) Apr 26 22:22:16.518: INFO: Restart count of pod container-probe-2689/liveness-6f0105ca-57d5-4d76-9c4b-ce6f964c59bf is now 5 (2m16.426262919s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:22:16.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2689" for this suite. 
• [SLOW TEST:140.556 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4272,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:22:16.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 26 22:22:16.649: INFO: Creating ReplicaSet my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a Apr 26 22:22:16.782: INFO: Pod name my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a: Found 0 pods out of 1 Apr 26 22:22:21.786: INFO: Pod name my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a: Found 1 pods out of 1 Apr 26 22:22:21.786: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a" is running Apr 26 22:22:21.788: INFO: Pod "my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a-4gnq8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:22:16 +0000 UTC Reason: 
Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:22:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:22:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-26 22:22:16 +0000 UTC Reason: Message:}]) Apr 26 22:22:21.788: INFO: Trying to dial the pod Apr 26 22:22:26.799: INFO: Controller my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a: Got expected result from replica 1 [my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a-4gnq8]: "my-hostname-basic-688f503b-b5b0-43c3-b9d2-e9fd74a61f7a-4gnq8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:22:26.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7774" for this suite. 
• [SLOW TEST:10.266 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":266,"skipped":4272,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:22:26.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 26 22:22:26.883: INFO: Waiting up to 5m0s for pod "pod-7a864947-2026-443e-9013-2cd7548e0136" in namespace "emptydir-3792" to be "success or failure" Apr 26 22:22:26.890: INFO: Pod "pod-7a864947-2026-443e-9013-2cd7548e0136": Phase="Pending", Reason="", readiness=false. Elapsed: 7.18509ms Apr 26 22:22:28.899: INFO: Pod "pod-7a864947-2026-443e-9013-2cd7548e0136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016315397s Apr 26 22:22:30.903: INFO: Pod "pod-7a864947-2026-443e-9013-2cd7548e0136": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020286481s STEP: Saw pod success Apr 26 22:22:30.903: INFO: Pod "pod-7a864947-2026-443e-9013-2cd7548e0136" satisfied condition "success or failure" Apr 26 22:22:30.906: INFO: Trying to get logs from node jerma-worker2 pod pod-7a864947-2026-443e-9013-2cd7548e0136 container test-container: STEP: delete the pod Apr 26 22:22:30.940: INFO: Waiting for pod pod-7a864947-2026-443e-9013-2cd7548e0136 to disappear Apr 26 22:22:30.952: INFO: Pod pod-7a864947-2026-443e-9013-2cd7548e0136 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:22:30.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3792" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:22:30.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted 
Apr 26 22:22:37.912: INFO: 0 pods remaining Apr 26 22:22:37.912: INFO: 0 pods has nil DeletionTimestamp Apr 26 22:22:37.912: INFO: STEP: Gathering metrics W0426 22:22:39.431410 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 26 22:22:39.431: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:22:39.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1542" for this suite. 
• [SLOW TEST:9.089 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":268,"skipped":4398,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:22:40.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 26 22:22:41.840: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 26 22:22:43.849: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536561, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536561, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536561, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723536561, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 26 22:22:46.905: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 22:22:46.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2372-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:22:48.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4123" for this suite.
STEP: Destroying namespace "webhook-4123-markers" for this suite.
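The step "Registering the mutating webhook for custom resource ... via the AdmissionRegistration API" amounts to creating a MutatingWebhookConfiguration that routes CREATE requests for the custom resource to the `e2e-test-webhook` service deployed above. A rough illustration of the shape of that object (the configuration name, path, resource plural, and CA bundle here are placeholders, not values taken from the test):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # placeholder name
webhooks:
  - name: mutate-crd.webhook.example.com # placeholder webhook name
    clientConfig:
      service:
        name: e2e-test-webhook           # the service paired with the webhook pod above
        namespace: webhook-4123
        path: /mutating-custom-resource  # placeholder path
      caBundle: <base64-encoded CA certificate>
    rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-2372-crds"]
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]
```

The API server sends matching admission requests to the service over TLS (hence the "Setting up server cert" step), and the webhook's AdmissionReview response carries a patch that mutates the incoming custom resource.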
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.116 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":269,"skipped":4408,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:22:48.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 26 22:22:48.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6457
I0426 22:22:48.254063 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6457, replica count: 1
I0426 22:22:49.304464 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0426 22:22:50.304645 6 runners.go:189] svc-latency-rc Pods: 1 out of 
1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 22:22:51.304847 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 22:22:51.492: INFO: Created: latency-svc-b2z9s Apr 26 22:22:51.499: INFO: Got endpoints: latency-svc-b2z9s [94.062332ms] Apr 26 22:22:51.536: INFO: Created: latency-svc-l2nrq Apr 26 22:22:51.552: INFO: Got endpoints: latency-svc-l2nrq [53.205129ms] Apr 26 22:22:51.660: INFO: Created: latency-svc-627wz Apr 26 22:22:51.668: INFO: Got endpoints: latency-svc-627wz [169.145563ms] Apr 26 22:22:51.698: INFO: Created: latency-svc-xd9jd Apr 26 22:22:51.726: INFO: Got endpoints: latency-svc-xd9jd [226.760262ms] Apr 26 22:22:51.810: INFO: Created: latency-svc-ck59j Apr 26 22:22:51.813: INFO: Got endpoints: latency-svc-ck59j [313.789623ms] Apr 26 22:22:51.842: INFO: Created: latency-svc-jcp84 Apr 26 22:22:51.874: INFO: Got endpoints: latency-svc-jcp84 [375.360212ms] Apr 26 22:22:51.899: INFO: Created: latency-svc-bpltb Apr 26 22:22:51.953: INFO: Got endpoints: latency-svc-bpltb [453.63414ms] Apr 26 22:22:51.980: INFO: Created: latency-svc-k89l9 Apr 26 22:22:51.996: INFO: Got endpoints: latency-svc-k89l9 [497.547773ms] Apr 26 22:22:52.022: INFO: Created: latency-svc-7g65j Apr 26 22:22:52.032: INFO: Got endpoints: latency-svc-7g65j [533.319283ms] Apr 26 22:22:52.109: INFO: Created: latency-svc-hztlf Apr 26 22:22:52.124: INFO: Got endpoints: latency-svc-hztlf [624.922109ms] Apr 26 22:22:52.170: INFO: Created: latency-svc-mkt2z Apr 26 22:22:52.183: INFO: Got endpoints: latency-svc-mkt2z [683.480628ms] Apr 26 22:22:52.209: INFO: Created: latency-svc-h9b8v Apr 26 22:22:52.270: INFO: Got endpoints: latency-svc-h9b8v [771.00127ms] Apr 26 22:22:52.273: INFO: Created: latency-svc-km699 Apr 26 22:22:52.283: INFO: Got endpoints: latency-svc-km699 [783.317292ms] Apr 26 22:22:52.307: INFO: Created: 
latency-svc-qqq8h Apr 26 22:22:52.326: INFO: Got endpoints: latency-svc-qqq8h [826.471978ms] Apr 26 22:22:52.350: INFO: Created: latency-svc-7r7sm Apr 26 22:22:52.364: INFO: Got endpoints: latency-svc-7r7sm [864.880561ms] Apr 26 22:22:52.434: INFO: Created: latency-svc-2lqsw Apr 26 22:22:52.472: INFO: Got endpoints: latency-svc-2lqsw [972.829284ms] Apr 26 22:22:52.508: INFO: Created: latency-svc-8ps5w Apr 26 22:22:52.520: INFO: Got endpoints: latency-svc-8ps5w [968.018189ms] Apr 26 22:22:52.582: INFO: Created: latency-svc-6b4jl Apr 26 22:22:52.592: INFO: Got endpoints: latency-svc-6b4jl [924.284076ms] Apr 26 22:22:52.616: INFO: Created: latency-svc-26z88 Apr 26 22:22:52.635: INFO: Got endpoints: latency-svc-26z88 [909.161951ms] Apr 26 22:22:52.658: INFO: Created: latency-svc-67fl8 Apr 26 22:22:52.706: INFO: Got endpoints: latency-svc-67fl8 [893.182851ms] Apr 26 22:22:52.751: INFO: Created: latency-svc-5b9lp Apr 26 22:22:52.764: INFO: Got endpoints: latency-svc-5b9lp [889.947712ms] Apr 26 22:22:52.788: INFO: Created: latency-svc-zfcwg Apr 26 22:22:52.875: INFO: Got endpoints: latency-svc-zfcwg [922.386514ms] Apr 26 22:22:52.876: INFO: Created: latency-svc-fv786 Apr 26 22:22:52.899: INFO: Got endpoints: latency-svc-fv786 [902.216184ms] Apr 26 22:22:52.939: INFO: Created: latency-svc-l5hwk Apr 26 22:22:53.129: INFO: Got endpoints: latency-svc-l5hwk [1.096184569s] Apr 26 22:22:53.135: INFO: Created: latency-svc-5rz2d Apr 26 22:22:53.178: INFO: Got endpoints: latency-svc-5rz2d [1.054260381s] Apr 26 22:22:53.288: INFO: Created: latency-svc-7gtjq Apr 26 22:22:53.310: INFO: Got endpoints: latency-svc-7gtjq [1.127672255s] Apr 26 22:22:53.348: INFO: Created: latency-svc-d4jcq Apr 26 22:22:53.365: INFO: Got endpoints: latency-svc-d4jcq [1.095002511s] Apr 26 22:22:53.426: INFO: Created: latency-svc-zcwx2 Apr 26 22:22:53.431: INFO: Got endpoints: latency-svc-zcwx2 [1.148365767s] Apr 26 22:22:53.459: INFO: Created: latency-svc-6tqm5 Apr 26 22:22:53.473: INFO: Got endpoints: 
latency-svc-6tqm5 [1.147551456s] Apr 26 22:22:53.502: INFO: Created: latency-svc-rh8nn Apr 26 22:22:53.564: INFO: Got endpoints: latency-svc-rh8nn [1.199501712s] Apr 26 22:22:53.594: INFO: Created: latency-svc-frls6 Apr 26 22:22:53.612: INFO: Got endpoints: latency-svc-frls6 [1.139941173s] Apr 26 22:22:53.648: INFO: Created: latency-svc-dj8qk Apr 26 22:22:53.657: INFO: Got endpoints: latency-svc-dj8qk [1.13677427s] Apr 26 22:22:53.713: INFO: Created: latency-svc-8qrnk Apr 26 22:22:53.730: INFO: Got endpoints: latency-svc-8qrnk [1.137422966s] Apr 26 22:22:53.786: INFO: Created: latency-svc-vwglz Apr 26 22:22:53.810: INFO: Got endpoints: latency-svc-vwglz [1.17487398s] Apr 26 22:22:53.852: INFO: Created: latency-svc-szjbj Apr 26 22:22:53.868: INFO: Got endpoints: latency-svc-szjbj [1.16175693s] Apr 26 22:22:53.904: INFO: Created: latency-svc-7b5hd Apr 26 22:22:53.928: INFO: Got endpoints: latency-svc-7b5hd [1.163620242s] Apr 26 22:22:53.978: INFO: Created: latency-svc-cqxnk Apr 26 22:22:53.988: INFO: Got endpoints: latency-svc-cqxnk [1.112811317s] Apr 26 22:22:54.019: INFO: Created: latency-svc-p2c5v Apr 26 22:22:54.061: INFO: Got endpoints: latency-svc-p2c5v [1.162763646s] Apr 26 22:22:54.121: INFO: Created: latency-svc-9xhqk Apr 26 22:22:54.139: INFO: Got endpoints: latency-svc-9xhqk [1.009813412s] Apr 26 22:22:54.167: INFO: Created: latency-svc-m7w8n Apr 26 22:22:54.181: INFO: Got endpoints: latency-svc-m7w8n [1.002530751s] Apr 26 22:22:54.258: INFO: Created: latency-svc-8zkj6 Apr 26 22:22:54.283: INFO: Got endpoints: latency-svc-8zkj6 [972.847358ms] Apr 26 22:22:54.284: INFO: Created: latency-svc-bplqh Apr 26 22:22:54.314: INFO: Got endpoints: latency-svc-bplqh [948.320725ms] Apr 26 22:22:54.349: INFO: Created: latency-svc-tw9st Apr 26 22:22:54.408: INFO: Got endpoints: latency-svc-tw9st [976.773174ms] Apr 26 22:22:54.450: INFO: Created: latency-svc-p25cj Apr 26 22:22:54.494: INFO: Got endpoints: latency-svc-p25cj [1.02069264s] Apr 26 22:22:54.572: INFO: Created: 
latency-svc-2qnwj Apr 26 22:22:54.605: INFO: Got endpoints: latency-svc-2qnwj [1.041477328s] Apr 26 22:22:54.606: INFO: Created: latency-svc-4pzfg Apr 26 22:22:54.635: INFO: Got endpoints: latency-svc-4pzfg [1.022757942s] Apr 26 22:22:54.729: INFO: Created: latency-svc-fs4j2 Apr 26 22:22:54.730: INFO: Got endpoints: latency-svc-fs4j2 [1.073149293s] Apr 26 22:22:54.782: INFO: Created: latency-svc-j5gcg Apr 26 22:22:54.801: INFO: Got endpoints: latency-svc-j5gcg [1.071321221s] Apr 26 22:22:54.863: INFO: Created: latency-svc-drcg8 Apr 26 22:22:54.865: INFO: Got endpoints: latency-svc-drcg8 [1.055325167s] Apr 26 22:22:54.899: INFO: Created: latency-svc-7frsb Apr 26 22:22:54.929: INFO: Got endpoints: latency-svc-7frsb [1.06099976s] Apr 26 22:22:55.001: INFO: Created: latency-svc-2rd76 Apr 26 22:22:55.033: INFO: Got endpoints: latency-svc-2rd76 [1.104901432s] Apr 26 22:22:55.087: INFO: Created: latency-svc-k2rlq Apr 26 22:22:55.139: INFO: Got endpoints: latency-svc-k2rlq [1.150495497s] Apr 26 22:22:55.175: INFO: Created: latency-svc-zqcjk Apr 26 22:22:55.192: INFO: Got endpoints: latency-svc-zqcjk [1.130582499s] Apr 26 22:22:55.217: INFO: Created: latency-svc-b4x9l Apr 26 22:22:55.234: INFO: Got endpoints: latency-svc-b4x9l [1.095364475s] Apr 26 22:22:55.283: INFO: Created: latency-svc-4jgr7 Apr 26 22:22:55.297: INFO: Got endpoints: latency-svc-4jgr7 [1.115968798s] Apr 26 22:22:55.333: INFO: Created: latency-svc-cj9hq Apr 26 22:22:55.351: INFO: Got endpoints: latency-svc-cj9hq [1.067299683s] Apr 26 22:22:55.446: INFO: Created: latency-svc-q554h Apr 26 22:22:55.463: INFO: Got endpoints: latency-svc-q554h [1.14926263s] Apr 26 22:22:55.513: INFO: Created: latency-svc-p44kk Apr 26 22:22:55.529: INFO: Got endpoints: latency-svc-p44kk [1.121196786s] Apr 26 22:22:55.594: INFO: Created: latency-svc-2gvpp Apr 26 22:22:55.602: INFO: Got endpoints: latency-svc-2gvpp [1.107583114s] Apr 26 22:22:55.631: INFO: Created: latency-svc-q8rn4 Apr 26 22:22:55.644: INFO: Got endpoints: 
latency-svc-q8rn4 [1.038336403s] Apr 26 22:22:55.693: INFO: Created: latency-svc-km4vt Apr 26 22:22:55.737: INFO: Got endpoints: latency-svc-km4vt [1.102262587s] Apr 26 22:22:55.765: INFO: Created: latency-svc-r2j6g Apr 26 22:22:55.787: INFO: Got endpoints: latency-svc-r2j6g [1.056711722s] Apr 26 22:22:55.822: INFO: Created: latency-svc-rk5qs Apr 26 22:22:55.869: INFO: Got endpoints: latency-svc-rk5qs [1.067717457s] Apr 26 22:22:55.897: INFO: Created: latency-svc-ktd9l Apr 26 22:22:55.914: INFO: Got endpoints: latency-svc-ktd9l [1.048960483s] Apr 26 22:22:55.945: INFO: Created: latency-svc-7r6m2 Apr 26 22:22:56.018: INFO: Got endpoints: latency-svc-7r6m2 [1.089465299s] Apr 26 22:22:56.020: INFO: Created: latency-svc-ptt47 Apr 26 22:22:56.035: INFO: Got endpoints: latency-svc-ptt47 [1.001451419s] Apr 26 22:22:56.057: INFO: Created: latency-svc-w6tcx Apr 26 22:22:56.077: INFO: Got endpoints: latency-svc-w6tcx [938.152842ms] Apr 26 22:22:56.216: INFO: Created: latency-svc-s2rkk Apr 26 22:22:56.227: INFO: Got endpoints: latency-svc-s2rkk [1.035108475s] Apr 26 22:22:56.250: INFO: Created: latency-svc-2d58h Apr 26 22:22:56.264: INFO: Got endpoints: latency-svc-2d58h [1.029450488s] Apr 26 22:22:56.324: INFO: Created: latency-svc-l4vfh Apr 26 22:22:56.330: INFO: Got endpoints: latency-svc-l4vfh [1.03296914s] Apr 26 22:22:56.351: INFO: Created: latency-svc-nr4j6 Apr 26 22:22:56.366: INFO: Got endpoints: latency-svc-nr4j6 [1.015727793s] Apr 26 22:22:56.386: INFO: Created: latency-svc-p8652 Apr 26 22:22:56.403: INFO: Got endpoints: latency-svc-p8652 [939.647615ms] Apr 26 22:22:56.474: INFO: Created: latency-svc-kwjtw Apr 26 22:22:56.487: INFO: Got endpoints: latency-svc-kwjtw [957.680783ms] Apr 26 22:22:56.525: INFO: Created: latency-svc-w9m2g Apr 26 22:22:56.541: INFO: Got endpoints: latency-svc-w9m2g [938.736688ms] Apr 26 22:22:56.566: INFO: Created: latency-svc-mbvrs Apr 26 22:22:56.629: INFO: Got endpoints: latency-svc-mbvrs [985.646387ms] Apr 26 22:22:56.631: INFO: 
Created: latency-svc-cgxm4 Apr 26 22:22:56.637: INFO: Got endpoints: latency-svc-cgxm4 [899.872299ms] Apr 26 22:22:56.659: INFO: Created: latency-svc-588sk Apr 26 22:22:56.673: INFO: Got endpoints: latency-svc-588sk [886.56902ms] Apr 26 22:22:56.695: INFO: Created: latency-svc-f8km4 Apr 26 22:22:56.704: INFO: Got endpoints: latency-svc-f8km4 [835.288112ms] Apr 26 22:22:56.725: INFO: Created: latency-svc-mc9m6 Apr 26 22:22:56.761: INFO: Got endpoints: latency-svc-mc9m6 [846.549937ms] Apr 26 22:22:56.789: INFO: Created: latency-svc-nfbc7 Apr 26 22:22:56.806: INFO: Got endpoints: latency-svc-nfbc7 [787.776557ms] Apr 26 22:22:56.831: INFO: Created: latency-svc-hzgkr Apr 26 22:22:56.849: INFO: Got endpoints: latency-svc-hzgkr [814.109864ms] Apr 26 22:22:56.892: INFO: Created: latency-svc-m98nj Apr 26 22:22:56.896: INFO: Got endpoints: latency-svc-m98nj [818.719531ms] Apr 26 22:22:56.929: INFO: Created: latency-svc-d2xrb Apr 26 22:22:56.957: INFO: Got endpoints: latency-svc-d2xrb [730.130449ms] Apr 26 22:22:56.987: INFO: Created: latency-svc-b5hqq Apr 26 22:22:57.048: INFO: Got endpoints: latency-svc-b5hqq [784.742809ms] Apr 26 22:22:57.052: INFO: Created: latency-svc-5d528 Apr 26 22:22:57.066: INFO: Got endpoints: latency-svc-5d528 [735.803376ms] Apr 26 22:22:57.103: INFO: Created: latency-svc-b4r4h Apr 26 22:22:57.120: INFO: Got endpoints: latency-svc-b4r4h [753.640535ms] Apr 26 22:22:57.145: INFO: Created: latency-svc-zkcdp Apr 26 22:22:57.180: INFO: Got endpoints: latency-svc-zkcdp [777.578095ms] Apr 26 22:22:57.196: INFO: Created: latency-svc-cdv2w Apr 26 22:22:57.211: INFO: Got endpoints: latency-svc-cdv2w [724.011884ms] Apr 26 22:22:57.233: INFO: Created: latency-svc-8lw4x Apr 26 22:22:57.248: INFO: Got endpoints: latency-svc-8lw4x [707.241066ms] Apr 26 22:22:57.268: INFO: Created: latency-svc-8rwqh Apr 26 22:22:57.324: INFO: Got endpoints: latency-svc-8rwqh [694.379088ms] Apr 26 22:22:57.326: INFO: Created: latency-svc-5jsdf Apr 26 22:22:57.338: INFO: Got 
endpoints: latency-svc-5jsdf [701.026857ms] Apr 26 22:22:57.385: INFO: Created: latency-svc-9f9m5 Apr 26 22:22:57.397: INFO: Got endpoints: latency-svc-9f9m5 [723.882426ms] Apr 26 22:22:57.418: INFO: Created: latency-svc-6dsnn Apr 26 22:22:57.480: INFO: Got endpoints: latency-svc-6dsnn [775.331071ms] Apr 26 22:22:57.508: INFO: Created: latency-svc-gpxxw Apr 26 22:22:57.540: INFO: Got endpoints: latency-svc-gpxxw [779.40766ms] Apr 26 22:22:57.570: INFO: Created: latency-svc-ksphf Apr 26 22:22:57.630: INFO: Got endpoints: latency-svc-ksphf [823.351507ms] Apr 26 22:22:57.665: INFO: Created: latency-svc-wl2hj Apr 26 22:22:57.695: INFO: Got endpoints: latency-svc-wl2hj [845.749861ms] Apr 26 22:22:57.773: INFO: Created: latency-svc-6f2cv Apr 26 22:22:57.776: INFO: Got endpoints: latency-svc-6f2cv [880.558645ms] Apr 26 22:22:57.804: INFO: Created: latency-svc-lxtmq Apr 26 22:22:57.813: INFO: Got endpoints: latency-svc-lxtmq [855.450065ms] Apr 26 22:22:57.834: INFO: Created: latency-svc-dfnc9 Apr 26 22:22:57.843: INFO: Got endpoints: latency-svc-dfnc9 [794.567536ms] Apr 26 22:22:57.869: INFO: Created: latency-svc-l2sws Apr 26 22:22:57.910: INFO: Got endpoints: latency-svc-l2sws [844.454163ms] Apr 26 22:22:57.922: INFO: Created: latency-svc-mk5bd Apr 26 22:22:57.940: INFO: Got endpoints: latency-svc-mk5bd [819.765874ms] Apr 26 22:22:57.961: INFO: Created: latency-svc-n6kw6 Apr 26 22:22:57.976: INFO: Got endpoints: latency-svc-n6kw6 [795.964462ms] Apr 26 22:22:57.997: INFO: Created: latency-svc-html5 Apr 26 22:22:58.007: INFO: Got endpoints: latency-svc-html5 [795.799417ms] Apr 26 22:22:58.073: INFO: Created: latency-svc-sj2nk Apr 26 22:22:58.076: INFO: Got endpoints: latency-svc-sj2nk [827.632497ms] Apr 26 22:22:58.145: INFO: Created: latency-svc-j86qx Apr 26 22:22:58.234: INFO: Got endpoints: latency-svc-j86qx [910.646131ms] Apr 26 22:22:58.237: INFO: Created: latency-svc-v5cd2 Apr 26 22:22:58.248: INFO: Got endpoints: latency-svc-v5cd2 [909.558883ms] Apr 26 22:22:58.272: 
INFO: Created: latency-svc-ts6t8 Apr 26 22:22:58.290: INFO: Got endpoints: latency-svc-ts6t8 [892.160208ms] Apr 26 22:22:58.320: INFO: Created: latency-svc-55bkh Apr 26 22:22:58.332: INFO: Got endpoints: latency-svc-55bkh [851.950495ms] Apr 26 22:22:58.378: INFO: Created: latency-svc-shnk9 Apr 26 22:22:58.386: INFO: Got endpoints: latency-svc-shnk9 [845.493755ms] Apr 26 22:22:58.408: INFO: Created: latency-svc-vjx6m Apr 26 22:22:58.422: INFO: Got endpoints: latency-svc-vjx6m [792.415146ms] Apr 26 22:22:58.446: INFO: Created: latency-svc-h6t48 Apr 26 22:22:58.465: INFO: Got endpoints: latency-svc-h6t48 [770.269757ms] Apr 26 22:22:58.516: INFO: Created: latency-svc-5qzrp Apr 26 22:22:58.531: INFO: Got endpoints: latency-svc-5qzrp [754.472615ms] Apr 26 22:22:58.558: INFO: Created: latency-svc-kl92x Apr 26 22:22:58.573: INFO: Got endpoints: latency-svc-kl92x [760.000863ms] Apr 26 22:22:58.594: INFO: Created: latency-svc-sh746 Apr 26 22:22:58.609: INFO: Got endpoints: latency-svc-sh746 [766.397042ms] Apr 26 22:22:58.654: INFO: Created: latency-svc-hznf7 Apr 26 22:22:58.657: INFO: Got endpoints: latency-svc-hznf7 [746.974654ms] Apr 26 22:22:58.687: INFO: Created: latency-svc-wl45b Apr 26 22:22:58.700: INFO: Got endpoints: latency-svc-wl45b [759.970609ms] Apr 26 22:22:58.722: INFO: Created: latency-svc-tgns6 Apr 26 22:22:58.736: INFO: Got endpoints: latency-svc-tgns6 [759.882139ms] Apr 26 22:22:58.797: INFO: Created: latency-svc-2zld9 Apr 26 22:22:58.840: INFO: Got endpoints: latency-svc-2zld9 [833.317008ms] Apr 26 22:22:58.840: INFO: Created: latency-svc-rp8x7 Apr 26 22:22:58.857: INFO: Got endpoints: latency-svc-rp8x7 [781.344238ms] Apr 26 22:22:58.882: INFO: Created: latency-svc-bwj6q Apr 26 22:22:58.953: INFO: Got endpoints: latency-svc-bwj6q [718.310645ms] Apr 26 22:22:58.955: INFO: Created: latency-svc-sk6zp Apr 26 22:22:58.965: INFO: Got endpoints: latency-svc-sk6zp [717.523761ms] Apr 26 22:22:58.989: INFO: Created: latency-svc-sshwx Apr 26 22:22:59.002: INFO: Got 
endpoints: latency-svc-sshwx [712.507223ms] Apr 26 22:22:59.044: INFO: Created: latency-svc-927ht Apr 26 22:22:59.093: INFO: Got endpoints: latency-svc-927ht [761.403112ms] Apr 26 22:22:59.116: INFO: Created: latency-svc-wmzdj Apr 26 22:22:59.147: INFO: Got endpoints: latency-svc-wmzdj [760.557832ms] Apr 26 22:22:59.172: INFO: Created: latency-svc-2vz2h Apr 26 22:22:59.188: INFO: Got endpoints: latency-svc-2vz2h [766.320859ms] Apr 26 22:22:59.234: INFO: Created: latency-svc-fjd8d Apr 26 22:22:59.242: INFO: Got endpoints: latency-svc-fjd8d [777.579327ms] Apr 26 22:22:59.266: INFO: Created: latency-svc-7vd2q Apr 26 22:22:59.279: INFO: Got endpoints: latency-svc-7vd2q [747.766554ms] Apr 26 22:22:59.303: INFO: Created: latency-svc-4m7fx Apr 26 22:22:59.315: INFO: Got endpoints: latency-svc-4m7fx [742.083065ms] Apr 26 22:22:59.334: INFO: Created: latency-svc-xhjhm Apr 26 22:22:59.387: INFO: Created: latency-svc-v7sc9 Apr 26 22:22:59.424: INFO: Got endpoints: latency-svc-xhjhm [814.305786ms] Apr 26 22:22:59.424: INFO: Got endpoints: latency-svc-v7sc9 [766.700431ms] Apr 26 22:22:59.448: INFO: Created: latency-svc-6cg6k Apr 26 22:22:59.482: INFO: Got endpoints: latency-svc-6cg6k [782.206716ms] Apr 26 22:22:59.536: INFO: Created: latency-svc-kmtch Apr 26 22:22:59.550: INFO: Got endpoints: latency-svc-kmtch [814.170021ms] Apr 26 22:22:59.572: INFO: Created: latency-svc-cbwqf Apr 26 22:22:59.586: INFO: Got endpoints: latency-svc-cbwqf [746.314427ms] Apr 26 22:22:59.610: INFO: Created: latency-svc-mh2ft Apr 26 22:22:59.653: INFO: Got endpoints: latency-svc-mh2ft [796.025203ms] Apr 26 22:22:59.666: INFO: Created: latency-svc-g6rl4 Apr 26 22:22:59.689: INFO: Got endpoints: latency-svc-g6rl4 [736.504122ms] Apr 26 22:22:59.718: INFO: Created: latency-svc-jf4xd Apr 26 22:22:59.731: INFO: Got endpoints: latency-svc-jf4xd [765.916212ms] Apr 26 22:22:59.752: INFO: Created: latency-svc-7hl5g Apr 26 22:22:59.803: INFO: Got endpoints: latency-svc-7hl5g [800.432945ms] Apr 26 22:22:59.817: 
INFO: Created: latency-svc-kbzgn Apr 26 22:22:59.834: INFO: Got endpoints: latency-svc-kbzgn [740.886232ms] Apr 26 22:22:59.862: INFO: Created: latency-svc-jdsk8 Apr 26 22:22:59.876: INFO: Got endpoints: latency-svc-jdsk8 [729.34732ms] Apr 26 22:22:59.935: INFO: Created: latency-svc-9nxb7 Apr 26 22:22:59.937: INFO: Got endpoints: latency-svc-9nxb7 [748.99487ms] Apr 26 22:22:59.998: INFO: Created: latency-svc-wxjbz Apr 26 22:23:00.034: INFO: Got endpoints: latency-svc-wxjbz [790.996706ms] Apr 26 22:23:00.103: INFO: Created: latency-svc-thp5m Apr 26 22:23:00.120: INFO: Got endpoints: latency-svc-thp5m [840.899713ms] Apr 26 22:23:00.150: INFO: Created: latency-svc-9hr9z Apr 26 22:23:00.264: INFO: Got endpoints: latency-svc-9hr9z [948.979215ms] Apr 26 22:23:00.268: INFO: Created: latency-svc-xvjqx Apr 26 22:23:00.273: INFO: Got endpoints: latency-svc-xvjqx [848.822502ms] Apr 26 22:23:00.294: INFO: Created: latency-svc-tl5tf Apr 26 22:23:00.303: INFO: Got endpoints: latency-svc-tl5tf [879.211935ms] Apr 26 22:23:00.330: INFO: Created: latency-svc-lsnnp Apr 26 22:23:00.334: INFO: Got endpoints: latency-svc-lsnnp [851.276051ms] Apr 26 22:23:00.360: INFO: Created: latency-svc-8bb9m Apr 26 22:23:00.364: INFO: Got endpoints: latency-svc-8bb9m [813.439702ms] Apr 26 22:23:00.410: INFO: Created: latency-svc-7tc7h Apr 26 22:23:00.418: INFO: Got endpoints: latency-svc-7tc7h [831.803348ms] Apr 26 22:23:00.445: INFO: Created: latency-svc-m98qv Apr 26 22:23:00.455: INFO: Got endpoints: latency-svc-m98qv [801.542424ms] Apr 26 22:23:00.480: INFO: Created: latency-svc-b9pfr Apr 26 22:23:00.497: INFO: Got endpoints: latency-svc-b9pfr [807.797322ms] Apr 26 22:23:00.552: INFO: Created: latency-svc-qrxzl Apr 26 22:23:00.557: INFO: Got endpoints: latency-svc-qrxzl [825.775362ms] Apr 26 22:23:00.586: INFO: Created: latency-svc-gxd8v Apr 26 22:23:00.599: INFO: Got endpoints: latency-svc-gxd8v [796.649749ms] Apr 26 22:23:00.628: INFO: Created: latency-svc-mz52t Apr 26 22:23:00.642: INFO: Got 
endpoints: latency-svc-mz52t [808.059213ms] Apr 26 22:23:00.696: INFO: Created: latency-svc-987ng Apr 26 22:23:00.698: INFO: Got endpoints: latency-svc-987ng [822.355691ms] Apr 26 22:23:00.732: INFO: Created: latency-svc-dsqmk Apr 26 22:23:00.745: INFO: Got endpoints: latency-svc-dsqmk [807.394973ms] Apr 26 22:23:00.769: INFO: Created: latency-svc-wdgz8 Apr 26 22:23:00.787: INFO: Got endpoints: latency-svc-wdgz8 [752.987733ms] Apr 26 22:23:00.838: INFO: Created: latency-svc-94m96 Apr 26 22:23:00.859: INFO: Got endpoints: latency-svc-94m96 [739.402972ms] Apr 26 22:23:00.886: INFO: Created: latency-svc-7l7wp Apr 26 22:23:00.901: INFO: Got endpoints: latency-svc-7l7wp [636.751564ms] Apr 26 22:23:00.924: INFO: Created: latency-svc-tnztn Apr 26 22:23:00.995: INFO: Got endpoints: latency-svc-tnztn [721.666267ms] Apr 26 22:23:00.998: INFO: Created: latency-svc-8c4dl Apr 26 22:23:01.016: INFO: Got endpoints: latency-svc-8c4dl [712.578379ms] Apr 26 22:23:01.048: INFO: Created: latency-svc-qtqpt Apr 26 22:23:01.076: INFO: Got endpoints: latency-svc-qtqpt [742.603339ms] Apr 26 22:23:01.138: INFO: Created: latency-svc-p2dtm Apr 26 22:23:01.141: INFO: Got endpoints: latency-svc-p2dtm [777.301893ms] Apr 26 22:23:01.170: INFO: Created: latency-svc-v5h2l Apr 26 22:23:01.184: INFO: Got endpoints: latency-svc-v5h2l [765.907357ms] Apr 26 22:23:01.206: INFO: Created: latency-svc-bpjqt Apr 26 22:23:01.214: INFO: Got endpoints: latency-svc-bpjqt [759.553667ms] Apr 26 22:23:01.236: INFO: Created: latency-svc-wtmsb Apr 26 22:23:01.300: INFO: Got endpoints: latency-svc-wtmsb [802.942061ms] Apr 26 22:23:01.312: INFO: Created: latency-svc-4qn99 Apr 26 22:23:01.317: INFO: Got endpoints: latency-svc-4qn99 [759.695665ms] Apr 26 22:23:01.342: INFO: Created: latency-svc-h5klr Apr 26 22:23:01.359: INFO: Got endpoints: latency-svc-h5klr [760.088108ms] Apr 26 22:23:01.452: INFO: Created: latency-svc-8br66 Apr 26 22:23:01.474: INFO: Got endpoints: latency-svc-8br66 [831.681055ms] Apr 26 22:23:01.546: 
INFO: Created: latency-svc-2jlzl Apr 26 22:23:01.593: INFO: Got endpoints: latency-svc-2jlzl [894.996153ms] Apr 26 22:23:01.608: INFO: Created: latency-svc-nvzvg Apr 26 22:23:01.624: INFO: Got endpoints: latency-svc-nvzvg [879.325997ms] Apr 26 22:23:01.650: INFO: Created: latency-svc-ksdp6 Apr 26 22:23:01.661: INFO: Got endpoints: latency-svc-ksdp6 [874.111294ms] Apr 26 22:23:01.680: INFO: Created: latency-svc-wnmp5 Apr 26 22:23:01.691: INFO: Got endpoints: latency-svc-wnmp5 [831.527917ms] Apr 26 22:23:01.755: INFO: Created: latency-svc-6z4b8 Apr 26 22:23:01.763: INFO: Got endpoints: latency-svc-6z4b8 [861.902123ms] Apr 26 22:23:01.792: INFO: Created: latency-svc-6wm8f Apr 26 22:23:01.806: INFO: Got endpoints: latency-svc-6wm8f [811.005428ms] Apr 26 22:23:01.823: INFO: Created: latency-svc-j8php Apr 26 22:23:01.917: INFO: Got endpoints: latency-svc-j8php [901.409103ms] Apr 26 22:23:01.938: INFO: Created: latency-svc-s8hht Apr 26 22:23:01.962: INFO: Got endpoints: latency-svc-s8hht [885.83979ms] Apr 26 22:23:01.992: INFO: Created: latency-svc-dp4tv Apr 26 22:23:02.010: INFO: Got endpoints: latency-svc-dp4tv [868.54307ms] Apr 26 22:23:02.122: INFO: Created: latency-svc-t7dhw Apr 26 22:23:02.160: INFO: Got endpoints: latency-svc-t7dhw [975.160945ms] Apr 26 22:23:02.196: INFO: Created: latency-svc-g8hqr Apr 26 22:23:02.212: INFO: Got endpoints: latency-svc-g8hqr [997.548146ms] Apr 26 22:23:02.250: INFO: Created: latency-svc-8pzpv Apr 26 22:23:02.267: INFO: Got endpoints: latency-svc-8pzpv [966.762441ms] Apr 26 22:23:02.284: INFO: Created: latency-svc-rpzfp Apr 26 22:23:02.297: INFO: Got endpoints: latency-svc-rpzfp [980.217708ms] Apr 26 22:23:02.320: INFO: Created: latency-svc-v2b4z Apr 26 22:23:02.339: INFO: Got endpoints: latency-svc-v2b4z [979.248819ms] Apr 26 22:23:02.384: INFO: Created: latency-svc-zjwqk Apr 26 22:23:02.393: INFO: Got endpoints: latency-svc-zjwqk [919.393215ms] Apr 26 22:23:02.412: INFO: Created: latency-svc-vcq2q Apr 26 22:23:02.423: INFO: Got 
endpoints: latency-svc-vcq2q [829.609352ms] Apr 26 22:23:02.452: INFO: Created: latency-svc-tkqph Apr 26 22:23:02.533: INFO: Got endpoints: latency-svc-tkqph [908.96185ms] Apr 26 22:23:02.548: INFO: Created: latency-svc-4cjkv Apr 26 22:23:02.562: INFO: Got endpoints: latency-svc-4cjkv [901.466178ms] Apr 26 22:23:02.579: INFO: Created: latency-svc-rxp42 Apr 26 22:23:02.592: INFO: Got endpoints: latency-svc-rxp42 [901.577526ms] Apr 26 22:23:02.610: INFO: Created: latency-svc-t4lp5 Apr 26 22:23:02.629: INFO: Got endpoints: latency-svc-t4lp5 [866.03618ms] Apr 26 22:23:02.690: INFO: Created: latency-svc-5648h Apr 26 22:23:02.711: INFO: Got endpoints: latency-svc-5648h [904.996934ms] Apr 26 22:23:02.740: INFO: Created: latency-svc-8svmq Apr 26 22:23:02.756: INFO: Got endpoints: latency-svc-8svmq [838.340498ms] Apr 26 22:23:02.827: INFO: Created: latency-svc-grdzf Apr 26 22:23:02.862: INFO: Created: latency-svc-7dqr4 Apr 26 22:23:02.862: INFO: Got endpoints: latency-svc-grdzf [899.739386ms] Apr 26 22:23:02.875: INFO: Got endpoints: latency-svc-7dqr4 [865.452647ms] Apr 26 22:23:02.896: INFO: Created: latency-svc-mjttq Apr 26 22:23:02.912: INFO: Got endpoints: latency-svc-mjttq [752.163799ms] Apr 26 22:23:02.971: INFO: Created: latency-svc-d52kn Apr 26 22:23:02.980: INFO: Got endpoints: latency-svc-d52kn [768.095871ms] Apr 26 22:23:03.024: INFO: Created: latency-svc-blj2x Apr 26 22:23:03.038: INFO: Got endpoints: latency-svc-blj2x [771.385022ms] Apr 26 22:23:03.061: INFO: Created: latency-svc-cdglw Apr 26 22:23:03.069: INFO: Got endpoints: latency-svc-cdglw [771.62126ms] Apr 26 22:23:03.108: INFO: Created: latency-svc-9ms8k Apr 26 22:23:03.116: INFO: Got endpoints: latency-svc-9ms8k [777.503765ms] Apr 26 22:23:03.141: INFO: Created: latency-svc-6mgfk Apr 26 22:23:03.171: INFO: Got endpoints: latency-svc-6mgfk [778.014754ms] Apr 26 22:23:03.202: INFO: Created: latency-svc-8j2dq Apr 26 22:23:03.240: INFO: Got endpoints: latency-svc-8j2dq [816.805342ms] Apr 26 22:23:03.253: 
INFO: Created: latency-svc-jxhnx Apr 26 22:23:03.268: INFO: Got endpoints: latency-svc-jxhnx [734.163202ms] Apr 26 22:23:03.288: INFO: Created: latency-svc-ngbgh Apr 26 22:23:03.298: INFO: Got endpoints: latency-svc-ngbgh [735.661981ms] Apr 26 22:23:03.298: INFO: Latencies: [53.205129ms 169.145563ms 226.760262ms 313.789623ms 375.360212ms 453.63414ms 497.547773ms 533.319283ms 624.922109ms 636.751564ms 683.480628ms 694.379088ms 701.026857ms 707.241066ms 712.507223ms 712.578379ms 717.523761ms 718.310645ms 721.666267ms 723.882426ms 724.011884ms 729.34732ms 730.130449ms 734.163202ms 735.661981ms 735.803376ms 736.504122ms 739.402972ms 740.886232ms 742.083065ms 742.603339ms 746.314427ms 746.974654ms 747.766554ms 748.99487ms 752.163799ms 752.987733ms 753.640535ms 754.472615ms 759.553667ms 759.695665ms 759.882139ms 759.970609ms 760.000863ms 760.088108ms 760.557832ms 761.403112ms 765.907357ms 765.916212ms 766.320859ms 766.397042ms 766.700431ms 768.095871ms 770.269757ms 771.00127ms 771.385022ms 771.62126ms 775.331071ms 777.301893ms 777.503765ms 777.578095ms 777.579327ms 778.014754ms 779.40766ms 781.344238ms 782.206716ms 783.317292ms 784.742809ms 787.776557ms 790.996706ms 792.415146ms 794.567536ms 795.799417ms 795.964462ms 796.025203ms 796.649749ms 800.432945ms 801.542424ms 802.942061ms 807.394973ms 807.797322ms 808.059213ms 811.005428ms 813.439702ms 814.109864ms 814.170021ms 814.305786ms 816.805342ms 818.719531ms 819.765874ms 822.355691ms 823.351507ms 825.775362ms 826.471978ms 827.632497ms 829.609352ms 831.527917ms 831.681055ms 831.803348ms 833.317008ms 835.288112ms 838.340498ms 840.899713ms 844.454163ms 845.493755ms 845.749861ms 846.549937ms 848.822502ms 851.276051ms 851.950495ms 855.450065ms 861.902123ms 864.880561ms 865.452647ms 866.03618ms 868.54307ms 874.111294ms 879.211935ms 879.325997ms 880.558645ms 885.83979ms 886.56902ms 889.947712ms 892.160208ms 893.182851ms 894.996153ms 899.739386ms 899.872299ms 901.409103ms 901.466178ms 901.577526ms 902.216184ms 904.996934ms 
908.96185ms 909.161951ms 909.558883ms 910.646131ms 919.393215ms 922.386514ms 924.284076ms 938.152842ms 938.736688ms 939.647615ms 948.320725ms 948.979215ms 957.680783ms 966.762441ms 968.018189ms 972.829284ms 972.847358ms 975.160945ms 976.773174ms 979.248819ms 980.217708ms 985.646387ms 997.548146ms 1.001451419s 1.002530751s 1.009813412s 1.015727793s 1.02069264s 1.022757942s 1.029450488s 1.03296914s 1.035108475s 1.038336403s 1.041477328s 1.048960483s 1.054260381s 1.055325167s 1.056711722s 1.06099976s 1.067299683s 1.067717457s 1.071321221s 1.073149293s 1.089465299s 1.095002511s 1.095364475s 1.096184569s 1.102262587s 1.104901432s 1.107583114s 1.112811317s 1.115968798s 1.121196786s 1.127672255s 1.130582499s 1.13677427s 1.137422966s 1.139941173s 1.147551456s 1.148365767s 1.14926263s 1.150495497s 1.16175693s 1.162763646s 1.163620242s 1.17487398s 1.199501712s]
Apr 26 22:23:03.298: INFO: 50 %ile: 835.288112ms
Apr 26 22:23:03.298: INFO: 90 %ile: 1.102262587s
Apr 26 22:23:03.298: INFO: 99 %ile: 1.17487398s
Apr 26 22:23:03.298: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:23:03.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6457" for this suite.
• [SLOW TEST:15.140 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":270,"skipped":4428,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:23:03.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:23:19.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1406" for this suite. • [SLOW TEST:16.422 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":271,"skipped":4430,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:23:19.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Apr 26 22:23:19.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5812' Apr 26 22:23:20.334: INFO: stderr: "" Apr 26 22:23:20.334: INFO: stdout: "pod/pause created\n" Apr 26 22:23:20.334: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 26 22:23:20.334: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5812" to be "running and ready" Apr 26 22:23:20.362: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 28.178358ms Apr 26 22:23:22.379: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044662599s Apr 26 22:23:24.389: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.055214482s Apr 26 22:23:24.389: INFO: Pod "pause" satisfied condition "running and ready" Apr 26 22:23:24.389: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 26 22:23:24.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5812' Apr 26 22:23:24.513: INFO: stderr: "" Apr 26 22:23:24.513: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 26 22:23:24.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5812' Apr 26 22:23:24.624: INFO: stderr: "" Apr 26 22:23:24.624: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 26 22:23:24.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5812' Apr 26 22:23:24.755: INFO: stderr: "" Apr 26 22:23:24.755: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 26 22:23:24.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5812' Apr 26 22:23:24.893: INFO: stderr: "" Apr 26 22:23:24.893: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Apr 26 22:23:24.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5812' Apr 26 22:23:25.072: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has 
been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 26 22:23:25.072: INFO: stdout: "pod \"pause\" force deleted\n" Apr 26 22:23:25.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5812' Apr 26 22:23:25.446: INFO: stderr: "No resources found in kubectl-5812 namespace.\n" Apr 26 22:23:25.446: INFO: stdout: "" Apr 26 22:23:25.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5812 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 26 22:23:25.582: INFO: stderr: "" Apr 26 22:23:25.582: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:23:25.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5812" for this suite. 
• [SLOW TEST:6.006 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":272,"skipped":4437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:23:25.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 26 22:23:30.371: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:23:30.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5002" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:23:30.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:23:30.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3231" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":274,"skipped":4507,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:23:30.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 26 22:23:30.676: INFO: Waiting up to 5m0s for pod "downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770" in namespace "downward-api-744" to be "success or failure" Apr 26 22:23:30.679: INFO: Pod "downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914072ms Apr 26 22:23:32.683: INFO: Pod "downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007279507s Apr 26 22:23:34.687: INFO: Pod "downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011123363s STEP: Saw pod success Apr 26 22:23:34.687: INFO: Pod "downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770" satisfied condition "success or failure" Apr 26 22:23:34.690: INFO: Trying to get logs from node jerma-worker pod downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770 container dapi-container: STEP: delete the pod Apr 26 22:23:34.722: INFO: Waiting for pod downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770 to disappear Apr 26 22:23:34.786: INFO: Pod downward-api-a336beb3-ca3b-41b0-8f64-6e9645f21770 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 26 22:23:34.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-744" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4521,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 26 22:23:34.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qvxb8 in namespace proxy-465 I0426 22:23:34.912754 6 runners.go:189] Created replication controller with name: proxy-service-qvxb8, namespace: proxy-465, replica count: 1 I0426 22:23:35.963164 
6 runners.go:189] proxy-service-qvxb8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 22:23:36.963382 6 runners.go:189] proxy-service-qvxb8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0426 22:23:37.963606 6 runners.go:189] proxy-service-qvxb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 22:23:38.963851 6 runners.go:189] proxy-service-qvxb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 22:23:39.964053 6 runners.go:189] proxy-service-qvxb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 22:23:40.964248 6 runners.go:189] proxy-service-qvxb8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0426 22:23:41.964449 6 runners.go:189] proxy-service-qvxb8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 26 22:23:41.968: INFO: setup took 7.142910239s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 26 22:23:41.975: INFO: (0) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799/proxy/: test (200; 6.594737ms) Apr 26 22:23:41.975: INFO: (0) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 7.386355ms) Apr 26 22:23:41.978: INFO: (0) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 9.585277ms) Apr 26 22:23:41.983: INFO: (0) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 14.456255ms) Apr 26 22:23:41.983: INFO: (0) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 14.347727ms) 
Apr 26 22:23:41.984: INFO: (0) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 16.160904ms) Apr 26 22:23:41.984: INFO: (0) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 16.189688ms) Apr 26 22:23:41.985: INFO: (0) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... (200; 16.567484ms) Apr 26 22:23:41.985: INFO: (0) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 17.477166ms) Apr 26 22:23:41.986: INFO: (0) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 17.826765ms) Apr 26 22:23:41.986: INFO: (0) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testtest (200; 5.514749ms) Apr 26 22:23:41.997: INFO: (1) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 5.61471ms) Apr 26 22:23:41.997: INFO: (1) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 5.543133ms) Apr 26 22:23:41.997: INFO: (1) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 5.574035ms) Apr 26 22:23:41.997: INFO: (1) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... 
(200; 5.613978ms) Apr 26 22:23:41.998: INFO: (1) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 6.607125ms) Apr 26 22:23:41.998: INFO: (1) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 6.846356ms) Apr 26 22:23:41.999: INFO: (1) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 7.042164ms) Apr 26 22:23:41.999: INFO: (1) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 7.054537ms) Apr 26 22:23:41.999: INFO: (1) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 7.083907ms) Apr 26 22:23:41.999: INFO: (1) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 7.249481ms) Apr 26 22:23:41.999: INFO: (1) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 7.189983ms) Apr 26 22:23:41.999: INFO: (1) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 7.432973ms) Apr 26 22:23:41.999: INFO: (1) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testt... 
(200; 6.643793ms) Apr 26 22:23:42.006: INFO: (2) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testtest (200; 6.870503ms) Apr 26 22:23:42.006: INFO: (2) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 7.000517ms) Apr 26 22:23:42.022: INFO: (2) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 22.200302ms) Apr 26 22:23:42.022: INFO: (2) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 22.292988ms) Apr 26 22:23:42.022: INFO: (2) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 22.454489ms) Apr 26 22:23:42.022: INFO: (2) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 22.488993ms) Apr 26 22:23:42.022: INFO: (2) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 22.972692ms) Apr 26 22:23:42.046: INFO: (3) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 23.477267ms) Apr 26 22:23:42.049: INFO: (3) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 26.211058ms) Apr 26 22:23:42.049: INFO: (3) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testt... 
(200; 26.718465ms) Apr 26 22:23:42.049: INFO: (3) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 26.828603ms) Apr 26 22:23:42.049: INFO: (3) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799/proxy/: test (200; 26.726081ms) Apr 26 22:23:42.049: INFO: (3) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 26.759941ms) Apr 26 22:23:42.050: INFO: (3) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 27.122612ms) Apr 26 22:23:42.050: INFO: (3) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 27.534249ms) Apr 26 22:23:42.050: INFO: (3) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 27.606776ms) Apr 26 22:23:42.050: INFO: (3) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 27.483872ms) Apr 26 22:23:42.063: INFO: (3) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: test (200; 12.285257ms) Apr 26 22:23:42.086: INFO: (4) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 11.631012ms) Apr 26 22:23:42.086: INFO: (4) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 11.553411ms) Apr 26 22:23:42.086: INFO: (4) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testt... 
(200; 11.709272ms) Apr 26 22:23:42.086: INFO: (4) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 12.79585ms) Apr 26 22:23:42.087: INFO: (4) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 12.956245ms) Apr 26 22:23:42.087: INFO: (4) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 12.765901ms) Apr 26 22:23:42.087: INFO: (4) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 13.165989ms) Apr 26 22:23:42.087: INFO: (4) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 12.880374ms) Apr 26 22:23:42.087: INFO: (4) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: t... (200; 24.40282ms) Apr 26 22:23:42.116: INFO: (5) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 24.586911ms) Apr 26 22:23:42.116: INFO: (5) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testtest (200; 24.77437ms) Apr 26 22:23:42.116: INFO: (5) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 24.77572ms) Apr 26 22:23:42.116: INFO: (5) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: test (200; 4.945272ms) Apr 26 22:23:42.124: INFO: (6) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 5.300527ms) Apr 26 22:23:42.125: INFO: (6) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 6.640004ms) Apr 26 22:23:42.126: INFO: (6) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 7.117279ms) Apr 26 22:23:42.126: INFO: (6) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 7.0596ms) Apr 26 22:23:42.126: INFO: (6) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 
7.031172ms) Apr 26 22:23:42.126: INFO: (6) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... (200; 7.067607ms) Apr 26 22:23:42.126: INFO: (6) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 7.057641ms) Apr 26 22:23:42.126: INFO: (6) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testtestt... (200; 3.853376ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.90423ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 3.904544ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 3.77037ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 3.955059ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 4.006665ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 3.995019ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799/proxy/: test (200; 3.928631ms) Apr 26 22:23:42.130: INFO: (7) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.918921ms) Apr 26 22:23:42.131: INFO: (7) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: test (200; 2.735741ms) Apr 26 22:23:42.134: INFO: (8) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 2.736867ms) Apr 26 22:23:42.135: INFO: (8) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... 
(200; 3.22055ms) Apr 26 22:23:42.135: INFO: (8) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 3.534119ms) Apr 26 22:23:42.135: INFO: (8) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testtest (200; 3.102879ms) Apr 26 22:23:42.140: INFO: (9) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 3.180558ms) Apr 26 22:23:42.140: INFO: (9) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.165527ms) Apr 26 22:23:42.140: INFO: (9) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... (200; 3.227428ms) Apr 26 22:23:42.140: INFO: (9) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 3.266726ms) Apr 26 22:23:42.140: INFO: (9) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testtest (200; 5.444771ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 5.36973ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 5.698452ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 6.01512ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 6.070072ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 6.003341ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 6.021854ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 6.091836ms) Apr 26 22:23:42.147: INFO: (10) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testt... 
(200; 6.354162ms) Apr 26 22:23:42.148: INFO: (10) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 6.46096ms) Apr 26 22:23:42.151: INFO: (11) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testt... (200; 3.15042ms) Apr 26 22:23:42.151: INFO: (11) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.16968ms) Apr 26 22:23:42.151: INFO: (11) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799/proxy/: test (200; 3.299088ms) Apr 26 22:23:42.151: INFO: (11) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 3.321218ms) Apr 26 22:23:42.151: INFO: (11) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 3.412908ms) Apr 26 22:23:42.151: INFO: (11) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 3.415171ms) Apr 26 22:23:42.151: INFO: (11) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 3.812677ms) Apr 26 22:23:42.152: INFO: (11) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 4.079734ms) Apr 26 22:23:42.152: INFO: (11) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 4.08093ms) Apr 26 22:23:42.152: INFO: (11) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 4.078237ms) Apr 26 22:23:42.155: INFO: (12) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 3.533467ms) Apr 26 22:23:42.155: INFO: (12) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.630775ms) Apr 26 22:23:42.155: INFO: (12) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799/proxy/: test (200; 3.57324ms) Apr 26 22:23:42.155: INFO: (12) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testt... 
(200; 3.78516ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.827718ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 3.799111ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 3.843496ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 3.791944ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 3.688872ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 3.889525ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 4.42784ms) Apr 26 22:23:42.156: INFO: (12) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 4.479846ms) Apr 26 22:23:42.159: INFO: (13) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 2.795429ms) Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 3.231552ms) Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799/proxy/: test (200; 3.15526ms) Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testt... 
(200; 3.543729ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.535485ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 3.609931ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 3.6869ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 3.776953ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 3.814277ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 3.943588ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 3.933093ms)
Apr 26 22:23:42.160: INFO: (13) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 4.017387ms)
Apr 26 22:23:42.163: INFO: (14) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 2.118601ms)
Apr 26 22:23:42.163: INFO: (14) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 2.200128ms)
Apr 26 22:23:42.163: INFO: (14) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testt... (200; 2.852771ms)
Apr 26 22:23:42.163: INFO: (14) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 2.892983ms)
Apr 26 22:23:42.163: INFO: (14) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: test (200; 3.350586ms)
Apr 26 22:23:42.164: INFO: (14) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 3.573138ms)
Apr 26 22:23:42.164: INFO: (14) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 3.692382ms)
Apr 26 22:23:42.164: INFO: (14) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 3.75976ms)
Apr 26 22:23:42.165: INFO: (14) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 4.163283ms)
Apr 26 22:23:42.168: INFO: (15) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... (200; 2.907168ms)
Apr 26 22:23:42.168: INFO: (15) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799/proxy/: test (200; 3.053196ms)
Apr 26 22:23:42.168: INFO: (15) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 3.392624ms)
Apr 26 22:23:42.168: INFO: (15) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 3.682157ms)
Apr 26 22:23:42.168: INFO: (15) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.668447ms)
Apr 26 22:23:42.169: INFO: (15) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 3.681237ms)
Apr 26 22:23:42.169: INFO: (15) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testtest (200; 3.377243ms)
Apr 26 22:23:42.173: INFO: (16) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:1080/proxy/: testt... (200; 3.369683ms)
Apr 26 22:23:42.173: INFO: (16) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testt... (200; 4.168286ms)
Apr 26 22:23:42.178: INFO: (17) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 4.440682ms)
Apr 26 22:23:42.178: INFO: (17) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: test (200; 4.472891ms)
Apr 26 22:23:42.178: INFO: (17) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:160/proxy/: foo (200; 4.606674ms)
Apr 26 22:23:42.179: INFO: (17) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 5.565237ms)
Apr 26 22:23:42.179: INFO: (17) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 5.661066ms)
Apr 26 22:23:42.179: INFO: (17) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 5.652787ms)
Apr 26 22:23:42.179: INFO: (17) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 5.743532ms)
Apr 26 22:23:42.179: INFO: (17) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 5.744257ms)
Apr 26 22:23:42.179: INFO: (17) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 5.71746ms)
Apr 26 22:23:42.183: INFO: (18) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... (200; 3.485234ms)
Apr 26 22:23:42.183: INFO: (18) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 3.375743ms)
Apr 26 22:23:42.183: INFO: (18) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 3.554862ms)
Apr 26 22:23:42.183: INFO: (18) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testtest (200; 3.546144ms)
Apr 26 22:23:42.183: INFO: (18) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 3.940099ms)
Apr 26 22:23:42.183: INFO: (18) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 4.042276ms)
Apr 26 22:23:42.184: INFO: (18) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 4.1176ms)
Apr 26 22:23:42.184: INFO: (18) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 4.105608ms)
Apr 26 22:23:42.184: INFO: (18) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 4.389467ms)
Apr 26 22:23:42.184: INFO: (18) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 4.421379ms)
Apr 26 22:23:42.188: INFO: (19) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:443/proxy/: testtest (200; 4.242988ms)
Apr 26 22:23:42.188: INFO: (19) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:162/proxy/: bar (200; 4.280852ms)
Apr 26 22:23:42.188: INFO: (19) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname1/proxy/: foo (200; 4.367045ms)
Apr 26 22:23:42.188: INFO: (19) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:1080/proxy/: t... (200; 4.30896ms)
Apr 26 22:23:42.188: INFO: (19) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:462/proxy/: tls qux (200; 4.359294ms)
Apr 26 22:23:42.188: INFO: (19) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname2/proxy/: bar (200; 4.315651ms)
Apr 26 22:23:42.189: INFO: (19) /api/v1/namespaces/proxy-465/pods/https:proxy-service-qvxb8-c2799:460/proxy/: tls baz (200; 4.534713ms)
Apr 26 22:23:42.189: INFO: (19) /api/v1/namespaces/proxy-465/pods/http:proxy-service-qvxb8-c2799:162/proxy/: bar (200; 4.58551ms)
Apr 26 22:23:42.189: INFO: (19) /api/v1/namespaces/proxy-465/pods/proxy-service-qvxb8-c2799:160/proxy/: foo (200; 4.731638ms)
Apr 26 22:23:42.189: INFO: (19) /api/v1/namespaces/proxy-465/services/http:proxy-service-qvxb8:portname1/proxy/: foo (200; 4.845891ms)
Apr 26 22:23:42.189: INFO: (19) /api/v1/namespaces/proxy-465/services/proxy-service-qvxb8:portname2/proxy/: bar (200; 5.208107ms)
Apr 26 22:23:42.189: INFO: (19) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname2/proxy/: tls qux (200; 5.212694ms)
Apr 26 22:23:42.189: INFO: (19) /api/v1/namespaces/proxy-465/services/https:proxy-service-qvxb8:tlsportname1/proxy/: tls baz (200; 5.270035ms)
STEP: deleting ReplicationController proxy-service-qvxb8 in namespace proxy-465, will wait for the garbage collector to delete the pods
Apr 26 22:23:42.246: INFO: Deleting ReplicationController proxy-service-qvxb8 took: 5.176263ms
Apr 26 22:23:42.546: INFO: Terminating ReplicationController proxy-service-qvxb8 pods took: 300.275047ms
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:23:49.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-465" for this suite.
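The request lines above all follow the API server's proxy-subresource URL scheme: `/api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>[:<port>]/proxy/<path>`, where the port may be a number (for pods) or a named service port. As a rough sketch, the paths the test hits can be reconstructed like this; the namespace, pod, and service names come from the log, but the helper function itself is illustrative and not part of the e2e framework:

```python
# Sketch: build the API-server proxy URLs exercised by the test above.
# proxy_url() is a hypothetical helper for illustration only.

def proxy_url(namespace, kind, name, scheme=None, port=None, path=""):
    """Build /api/v1/namespaces/<ns>/<kind>/[<scheme>:]<name>[:<port>]/proxy/<path>."""
    target = name
    if scheme:
        target = f"{scheme}:{target}"  # http: or https: prefix selects the backend scheme
    if port is not None:
        target = f"{target}:{port}"    # numeric pod port, or named service port
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/{path}"

# Plain pod port, as in ".../pods/proxy-service-qvxb8-c2799:160/proxy/"
print(proxy_url("proxy-465", "pods", "proxy-service-qvxb8-c2799", port=160))
# HTTPS service with a named TLS port, as in
# ".../services/https:proxy-service-qvxb8:tlsportname1/proxy/"
print(proxy_url("proxy-465", "services", "proxy-service-qvxb8",
                scheme="https", port="tlsportname1"))
```

The `(N)` prefix on each log line is the iteration counter: the test issues the same set of requests twenty times (0 through 19) and records the latency of each response.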
• [SLOW TEST:14.761 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":276,"skipped":4524,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:23:49.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:24:05.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8126" for this suite.
• [SLOW TEST:16.246 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":277,"skipped":4533,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 26 22:24:05.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-0ca0d725-2843-4e91-ad02-cc6ab2a10b5b
STEP: Creating a pod to test consume secrets
Apr 26 22:24:05.893: INFO: Waiting up to 5m0s for pod "pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e" in namespace "secrets-2113" to be "success or failure"
Apr 26 22:24:05.896: INFO: Pod "pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.881265ms
Apr 26 22:24:07.900: INFO: Pod "pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007193177s
Apr 26 22:24:09.904: INFO: Pod "pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011512039s
STEP: Saw pod success
Apr 26 22:24:09.904: INFO: Pod "pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e" satisfied condition "success or failure"
Apr 26 22:24:09.907: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e container secret-env-test:
STEP: delete the pod
Apr 26 22:24:09.948: INFO: Waiting for pod pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e to disappear
Apr 26 22:24:09.962: INFO: Pod pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 26 22:24:09.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2113" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4539,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
Apr 26 22:24:09.970: INFO: Running AfterSuite actions on all nodes
Apr 26 22:24:09.970: INFO: Running AfterSuite actions on node 1
Apr 26 22:24:09.970: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4628.032 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
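The final Secrets test above creates a Secret and a pod whose container reads a secret key through an environment variable (`secretKeyRef`), then checks the pod runs to completion. A minimal sketch of the object shapes involved, built as plain dicts: the secret name, pod name, and container name (`secret-env-test`) are taken from the log, while the key `data-1`, its value, and the busybox image are assumptions for illustration, since the log does not show them.

```python
import json

# Sketch of the Secret + Pod shape the test exercises: a Secret key injected
# into a container env var via secretKeyRef. Key/value and image are assumed.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-test-0ca0d725-2843-4e91-ad02-cc6ab2a10b5b"},
    "stringData": {"data-1": "value-1"},  # assumed key/value, not in the log
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-0b646bac-e271-47e6-a747-c97cb0a1722e"},
    "spec": {
        "restartPolicy": "Never",  # pod must reach Succeeded, matching the log
        "containers": [{
            "name": "secret-env-test",       # container name from the log
            "image": "busybox",              # assumed image
            "command": ["sh", "-c", "env"],  # print env so the test can read logs
            "env": [{
                "name": "SECRET_DATA",       # hypothetical env var name
                "valueFrom": {
                    "secretKeyRef": {
                        "name": secret["metadata"]["name"],
                        "key": "data-1",
                    }
                },
            }],
        }],
    },
}

print(json.dumps(pod, indent=2))
```

This mirrors the log's sequence: the pod stays `Pending` while the image pulls, reaches `Succeeded` about four seconds later, the test fetches the container logs to verify the env var, and the pod and namespace are then deleted.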